
Synthese

https://doi.org/10.1007/s11229-018-01972-2

Higher-order defeat and intellectual responsibility

Ru Ye1

1 School of Philosophy, Wuhan University, Wuhan 430072, Hubei Province, People's Republic of China

Received: 18 July 2017 / Accepted: 29 September 2018


© Springer Nature B.V. 2018

Abstract
It’s widely accepted that higher-order defeaters, i.e., evidence that one’s belief is
formed in an epistemically defective way, can defeat doxastic justification. However,
it’s yet unclear how exactly such kind of defeat happens. Given that many theories
of doxastic justification can be understood as fitting the schema of proper basing on
propositional justifiers, we might attempt to explain the defeat either by arguing that
a higher-order defeater defeats propositional justification or by arguing that it defeats
proper basing. It has been argued that the first attempt is unpromising because a variety
of prominent theories of propositional justification don’t imply that we lose proposi-
tional justification when gaining higher-order defeaters. This leads some scholars to
take the second attempt. In this paper, I criticize this second attempt, and I defend
the first attempt by arguing that a theory of propositional justification that requires
intellectual responsibility can nicely account for higher-order defeat. My proposal is
that we lose doxastic justification when we gain higher-order defeaters because there
is no intellectually responsible way for us to maintain our original beliefs due to the
defeaters.

Keywords Higher-order evidence · Intellectual responsibility · Defeat · Justification

1 Introduction

Recently, a number of scholars have been attracted to the phenomenon of higher-order defeat of doxastic justification. This is defeat caused by so-called 'higher-order evidence.' Roughly, the kind of higher-order evidence that allegedly has defeating power is not evidence directly about the content of one's belief but evidence that one's belief is formed in an epistemically defective way. For example, it might be evidence that your belief in a mathematical proposition is the result of a reasoning-damaging drug, or evidence that one of your perceptual beliefs is the result of hallucination.



It's widely accepted that, when you gain such evidence, your belief is no longer doxastically justified. (Hereafter, I will call the higher-order evidence that allegedly has defeating power 'HOD,' and I will use 'higher-order defeat' to refer to the alleged phenomenon of defeat resulting from gaining HOD.)1
Assuming that gaining HOD does defeat doxastic justification, we might wonder
why exactly the defeat happens. Given that many theories of doxastic justification can
be understood as fitting the schema of proper basing on propositional justifiers (see
Turri 2010), we might attempt to explain higher-order defeat either by arguing that
HOD defeats propositional justification or by arguing that it defeats proper basing. It
has been argued that the former attempt is unpromising because a variety of prominent
theories of propositional justification cannot account for higher-order defeat (Chris-
tensen 2010; Lasonen-Aarnio 2014). This leads some scholars to take the second
attempt. (See Smithies 2015 and van Wietmarschen 2013.)
In this paper, I criticize this second attempt, and I defend the first attempt by argu-
ing that a theory of propositional justification that requires intellectual responsibility
can nicely explain why exactly HOD defeats justification. My proposal is that HOD
defeats doxastic justification by defeating propositional justification, and it defeats
propositional justification because there is in principle no intellectually responsible
way to maintain the original beliefs due to the presence of HOD.
The question of why higher-order defeat happens is important. First, the answer
will substantially constrain our theories of justification. For an account of why exactly a belief is defeated tells us a lot about why exactly a belief is justified. A theory of justification
would be defective if it cannot explain why an important kind of defeat happens.
So, if my argument were successful, it would give us a strong reason to believe that
justification requires intellectual responsibility. And since the responsibility condition
is most amenable to deontological theories and virtue theories of justification, my
discussion will provide a strong reason to move towards these two kinds of theories.
Second, seeing why higher-order defeat happens will help explain what makes the so-called 'level-bridging principle' true. Roughly, the principle bans holding a belief while simultaneously believing that it is epistemically defective.2 The principle has recently played an important role in several scholars' arguments for significant conclusions about epistemic rationality or justification (hereafter, I will use 'rationality' and 'justification' interchangeably).3 However, although the principle is intuitive and although violating it brings unwelcome commitments (Horowitz 2014), there has been little work devoted to grounding the principle, namely, to explaining what makes the principle true.
1 Prominent proponents of the reality of higher-order defeat include: Christensen (2007a, 2007b, 2010),
Elga (2013), Feldman (2005), Foley (2001), Huemer (2011) and Kelly (2010). However, although the
phenomenon of higher-order defeat is widely noted, its reality is not uncontroversial. Deniers include:
Coates (2012), Lasonen-Aarnio (2010, 2014) and Williamson (2011).
2 All defenders of higher-order defeat listed in fn. 1 also accept level-bridging. Besides, Broome (2013),
Greco (2014), Horowitz (2014), Ichikawa and Jarvis (2013), Smithies (2012), Titelbaum (2015) and Ye
(2015) have defended level-bridging. And the deniers listed in fn. 1 also reject level-bridging.
3 For example, the principle is crucial in: Foley’s (1990) argument that it’s impossible to give a sufficient
condition of rationality, Christensen’s argument (2007a, 2010) for epistemic dilemmas, Littlejohn’s (2015)
argument against evidentialism, and Worsnip’s (2018) argument that epistemic rationality is not about
believing what’s supported by evidence but about maintaining coherence.


This is unsatisfying. (Compare: skepticism is unintuitive and it brings unwelcome commitments, but it would still be good if we could positively explain what makes the skeptic's argument wrong.) I believe that a positive story
about why higher-order defeat happens will provide such a ground: we can explain
what makes the level-bridging principle true by explaining exactly why evidence for
the higher-order belief defeats the first-order belief.
My paper proceeds in the following way. In Sect. 2, I clarify the notion of HOD.
In Sect. 3, I criticize the proposal that HOD makes our beliefs unjustified not by
defeating propositional justification but by making our beliefs no longer properly
based on propositional justifiers. In Sect. 4, I propose that HOD defeats propositional
justification, and I support this proposal by explaining how including intellectual
responsibility as a required condition of propositional justification can nicely account
for higher-order defeat. Section 5 concludes the paper.

2 What is a HOD?

As mentioned above, HOD is evidence that one's belief is formed in an epistemically defective way. To see what exactly the defect involves, consider the following paradigm examples of HOD discussed in the literature on higher-order evidence.
Drug
I am a student in a logic class. I believe that I have just solved the logical puzzle given by the professor. But then I receive evidence that someone slipped into my coffee a drug that undetectably harms one's logical reasoning ability. (Christensen 2010, p. 187)
Sleep deprivation
A doctor just made a diagnosis for a patient based on the symptoms he observes.
But then he is reminded that he has been awake for 36 h. (Christensen 2010,
p. 186)
Hypoxia
A pilot is considering whether he has enough fuel to make it to Hawaii. Based
on his past experience and his calculation of how much fuel is needed, the pilot
believes that he can make it to Hawaii. But then he gets evidence that he is in
a state of hypoxia, a condition that often undetectably harms pilots’ reasoning.
(Lasonen-Aarnio 2014, p. 315)
How should we characterize the HOD in these examples? There are two answers in
the literature. The first one says that it’s evidence that one’s belief is irrational. For
example, Christensen (2010, p. 185) characterizes HOD as evidence that one’s belief is
‘rationally sub-par’ or evidence of one’s ‘rational failure.’ (Also see Lasonen-Aarnio
2014, pp. 315–316.) The second characterization describes HOD not as evidence of
irrationality but as evidence of unreliability, namely, evidence that one is unlikely to
reach a true belief in the current situation (Christensen 2016, p. 397).
Which characterization we should choose depends on what theory of rationality we hold. If rationality is essentially the same thing as reliability, as reliabilists tend to think, then evidence of irrationality is the same thing as evidence of unreliability.
And thus it doesn’t make much difference which characterization we choose. But
if rationality is essentially a matter of conforming to what one’s evidence supports,
as evidentialists tend to think, then evidence of unreliability will be broader than
evidence of irrationality. To the extent that irrational beliefs are often unlikely to be
true, evidence of irrationality will often also be evidence of unreliability. But evidence
of unreliability is not limited to evidence of irrationality. In this case, characterizing
HOD as evidence of irrationality will be too restrictive because it will leave out many
instances of defeaters that intuitively should be classified as HOD. For example, the
doctor in Sleep Deprivation might gain evidence that, although his diagnosis is rational
because it’s supported by the actual evidence he possesses, his lack of sleep makes him
unreliable in noticing crucial symptoms of the patient. Such defects in collecting or
identifying evidence will also make the doctor unlikely to give a correct diagnosis: it's not an uncommon phenomenon that, although a doctor correctly assesses what his current observations support, he fails to give a correct diagnosis because he fails to
notice a crucial symptom. So, when the doctor gets evidence that he is unable to notice
certain crucial symptoms, his belief might be defeated even if it’s not evidence that he
is unable to correctly assess the probative force of his evidence.4 That is, in this case,
the doctor’s evidence of sleep deprivation is evidence of unreliability, even if it’s not
evidence that his belief is irrational (in the evidentialist sense of irrationality). So, if
we want to allow that an evidentialist theory of rationality might be correct, it’s better
to characterize HOD as the broader notion of evidence of unreliability.
This characterization enables us to see how HOD connects to the other two types
of defeaters that we are familiar with: rebutting defeaters and undercutting defeaters.
Suppose one justifiably believes p based on one’s evidence E. A rebutting defeater
is evidence attacking the content of the belief: it’s evidence that p is false. And an
undercutting defeater is evidence attacking the evidential connection: it’s evidence
that E doesn’t support p. (See Pollock and Cruz (1999, p. 196).) These two types of
defeaters relate to HOD in the following way. (1) Rebutting defeaters are a relatively weak kind of HOD: evidence that p is false is also evidence that you've reached a false belief about p, and thus some weak evidence of your unreliability. (2) Undercutting defeaters are also a weak kind of HOD: they are evidence that you've misjudged the evidential connection between E and p, and thus some (relatively weak) evidence of your unreliability in reaching a true belief about p.

4 One might doubt whether this is possible by reasoning this way: If one correctly judges that one’s evidence
E supports one’s belief p, then evidence that one is unable to notice potential disconfirming evidence is not a
defeater, because if E supports p, it would also support that the potential disconfirming evidence is unlikely
to obtain (that is, if P(p/E) is high and yet P(p/E&e) is low, then P(e/E) must be low). Therefore, evidence
of deficient ability in collecting evidence is not a defeater, because one could be confident that a piece of
disconfirming evidence would be unlikely to obtain even if one looks for it with good evidence-collecting
ability.
In reply, I think the above reasoning is correct in general, but not in those cases where P(p/E) is just a
little higher than the threshold for rational belief and P(p/E&e) is just a little lower than that threshold. In
such cases, P(e/E) need not be below the threshold. That said, I should note that the possibility of such
cases rests on the controversial assumption that there is a sharp threshold on how probable a proposition
must be in order for one to rationally believe it. Thanks to a reviewer for raising the above objection and
for pressing me to think about this issue.
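To make the parenthetical probabilistic claim above explicit, here is a minimal derivation sketch, written in LaTeX notation (with P(p/E) rendered as P(p \mid E)) and assuming nothing beyond the law of total probability:

\begin{align*}
P(p \mid E) &= P(p \mid E \wedge e)\,P(e \mid E) + P(p \mid E \wedge \neg e)\,P(\neg e \mid E)\\
            &\leq P(p \mid E \wedge e)\,P(e \mid E) + \bigl(1 - P(e \mid E)\bigr),\\
\text{so that}\quad P(e \mid E) &\leq \frac{1 - P(p \mid E)}{1 - P(p \mid E \wedge e)}.
\end{align*}

When P(p \mid E) is high and P(p \mid E \wedge e) is low, the right-hand side of the last line is small, so P(e \mid E) must be low. When P(p \mid E) sits just above the belief threshold and P(p \mid E \wedge e) just below it, the bound approaches 1, which is the loophole exploited in the reply above.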


But in those cases where E is your only evidence about p, undercutting defeaters will also be strong HOD. For in such cases, evidence that E doesn't support p is evidence that you have no good evidence about p to rely on, and thus it's evidence that you are unlikely to reach a true belief about p.
Despite these close connections, HOD still differs sharply enough from the two familiar types of defeaters to deserve the recent surge of interest in it. To see this, I should first point out that there is 'pure' HOD: HOD that is neither a rebutter nor an undercutter, or more exactly, HOD that is strong enough to require you to suspend judgment while its rebutting or undercutting element is at best very weak. Imagine that you are
a student working on whether a mathematical proposition holds. One day you think
that you have constructed a proof of the proposition from some axioms. You have
checked the proof many times and it is in fact correct. But then your professor tells
you it is incorrect, without telling you whether the proposition is false or whether
the axioms you appeal to do entail the proposition. In particular, he says, "correctly proving the proposition or proving its negation requires a certain highly sophisticated skill that is far beyond your current mathematical capacity, so whatever 'proof' you think you've got is probably wrong." In this case, you get strong evidence that your
proof is wrong, and this evidence is not strong evidence that the entailment is not
there, since you should think that your giving a wrong proof is better explained by
your lacking the crucial skill than by the absence of the entailment. In this case, you
get a strong HOD—strong enough to require you to suspend judgment—but at best
a weak rebutter and undercutter. Evidence that your proof is incorrect is only weak
evidence that the proposition is not entailed by the axioms (perhaps there are tons of
other proofs) and even weaker evidence that the proposition is not true.5
So, we do have pure HOD. It’s evidence that you are unlikely to reach a true belief
about the proposition in question due to drug, hallucination, or some other condition
that impairs cognitive abilities, without being evidence about whether the proposition
is true or whether it’s supported by your original evidence. The bottom line is that
pure HOD is evidence about your performance in the process of reaching an attitude
about the proposition in question. That you are unlikely to reach a true belief about the
proposition or about the evidential connection in question tells us little about whether
the proposition is true or whether the evidential connection is there.6 Hereafter, when I talk about HOD, I refer to pure HOD unless otherwise stated. (I do this mainly out of theoretical interest. It's interesting to see how pure HOD works, given that HOD is introduced into the literature as a kind of defeater differing from the traditional rebutting or undercutting defeaters. I don't assume that pure HOD must work in a different way from rebutters or undercutters.)

5 In explaining how HOD differs from undercutting evidence, Christensen (2010, pp. 194–195) argues that
the former often leaves the evidential support intact. It seems that this is not a good way to characterize the
difference. For if one’s original evidence E supports p, undercutting evidence that E doesn’t support p also
doesn’t need to make it the case that E no longer supports p (Worsnip 2018, pp. 21–30).
6 The reverse might not be true: that you are quite capable in assessing the evidential connection can tell
us something about the connection when we learn what your judgment about the connection is.


3 The proper basing proposal fails

3.1 Motivation for the proper-basing proposal

It’s common nowadays to distinguish propositional justification from doxastic jus-


tification. The distinction corresponds to the distinction between saying ‘one has
justification to believe that p’ and saying ‘one’s belief that p is justified’ (or ‘one is
justified in believing that p’). According to an orthodox schema, doxastic justification
is propositional justification plus proper basing. (For a list of prominent epistemol-
ogists who explicitly endorse this schema, see the references given in Turri 2010,
pp. 3–5.)7 One has propositional justification to believe a proposition p when one has
good reason to believe that p. But to have doxastic justification in believing that p,
having good reasons is not enough; one’s belief must also be properly based on those
good reasons. Given this schema, we might wonder whether HOD defeats doxastic
justification by defeating propositional justification or by defeating proper basing.
Since the introduction of the notion of HOD, several scholars have argued that, for
a broad variety of theories of propositional justification, it’s hard to see how higher-
order defeat happens. The core idea of their argument is that gaining HOD wouldn’t
automatically imply that one’s belief doesn’t fit one’s total evidence, or that there is
no reliable process that would produce the belief if operated, or that there is no good
epistemic rule recommending holding the belief, etc.
For instance, Christensen (2010, pp. 195–197) has argued that the evidentialist
theory of propositional justification—according to which to have propositional justifi-
cation for one’s belief p is for the belief to be supported by one’s total evidence—cannot
account for higher-order defeat.8 Whether evidential support is understood in terms of one's total evidence making p probable or in terms of its reliably indicating p, HOD leaves the evidential connection intact (remember that I am talking about pure HOD, which is evidence solely about your reliability, without being evidence about p or about the evidential connection). That is, if your original total evidence E supports p before you gain HOD, E still supports p when you gain HOD. And since HOD has nothing to do with whether p is true, that your original total evidence E still supports p seems to imply that your current total evidence E&HOD also supports p. So, evidentialism predicts that HOD doesn't defeat propositional justification.
Reliabilist theories of propositional justification fare no better in accounting for HOD. Consider Lasonen-Aarnio's argument (2014, pp. 325–326). Suppose that, in a case like Drug, my belief in the solution of the puzzle is originally produced by an infallible logical faculty that I possess.

7 Note that this schema is neutral on which kind of justification is more fundamental. For instance, a
reliabilist might think that doxastic justification is more fundamental because propositional justification
can be defined in terms of the availability of a belief-forming process that would produce doxastically justified
beliefs if operated. (Goldman (1979) defines ‘ex ante justification’ this way, which is a notion close to
‘propositional justification.’) This definition would still imply that doxastic justification is propositional
justification plus proper basing, with the basing condition understood in this way: one’s belief is based on
a process if it’s produced by the process.
8 van Wietmarschen (2013, pp. 401–409) has argued that the HOD provided by peer disagreement doesn’t
undermine evidential support when one’s original reasoning is close to ideal reasoning.


When I get the drug evidence, it seems that this 100% reliable logical faculty would still be available. So I still have propositional justification to believe the solution. And even if a reliabilist adds a condition like 'there must be no alternative, equally reliable process available to one that would not result in a belief in p,' the difficulty persists: it seems that no alternative process by which I might come to give up believing the solution due to HOD would be 100% reliable.
Given that these prominent theories of propositional justification fail to predict
that one no longer has propositional justification when gaining HOD, it’s tempting to
suspect that HOD defeats doxastic justification not by defeating propositional justifi-
cation but by defeating proper-basing. The suspicion is that HOD makes one’s belief
doxastically unjustified by making it no longer properly based on one’s propositional
justifiers. Let’s call this ‘the proper basing proposal’ or ‘PB’ for short.
PB sounds plausible. It’s tempting to think that, if you get evidence that you are
unreliable in assessing the evidential connection, your belief is no longer justified
because somehow you should no longer ‘rely on’ your original evidence, even if
the evidence still supports your belief. In fact, Christensen (2010) comes close to
endorsing this proposal when he suggests that HOD defeats justification by requiring
one to ‘bracket’ one’s original evidence. Although he doesn’t explain clearly what
‘bracketing’ is, it’s not quite a stretch to say that to bracket one’s evidence is to ‘not
rely on’ one’s evidence or ‘not base’ one’s belief on the evidence. In what follows, I
explore how PB might be developed and I argue that it faces severe difficulties.

3.2 Basing versus proper basing: two versions of PB

Before I get into PB’s difficulties, I want to clarify what exactly the PB theorist’s
task is. What he must explain is not only how gaining HOD makes one’s belief in
fact improperly based, but also how the HOD makes it impossible for one’s belief to
be properly based (impossible given one's epistemic situation). For if the evidence merely makes one's belief improperly based as a matter of fact without making proper basing impossible, then such evidence shouldn't require one to give up one's belief; instead, what one should do is continue to hold that belief but revise the way in which the belief is based. But this is not how the defeating effect of HOD is taken. Those who take higher-order defeat seriously think that we should really suspend judgment upon getting such HOD, not just rebase our original beliefs.9
Now that we are clearer on PB’s task, we can distinguish two versions of PB.
The first version focuses on the ‘basing’ part: It says that HOD defeats justification
by making one’s belief no longer based on good reasons, even if one still has good
reasons. The second version focuses on the ‘proper’ part: It says that HOD makes
one’s belief no longer properly based on good reasons.
The first version is clearly implausible. Apparently, how one's belief is based is a purely psychological process, a process concerning how the belief is formed or maintained, and gaining a piece of evidence need not change this psychological process.

9 For example, Christensen (2010) argues that one should suspend judgment in those paradigm examples
of HOD. Besides, both defenders and deniers of the level-bridging principle (as is listed in fn. 2) think that,
if higher-order evidence has defeating power, it requires giving up the belief and not just rebasing it. See,
for example, Lasonen-Aarnio (2014).


No matter what evidence one gains, it's entirely possible for one to hold one's belief on the same basis. It's entirely possible for one to ignore the new evidence, acting as if one had never gained it in the first place.
This verdict is confirmed by major theories of basing in play. (For good overviews
of basing, see Korcz (2010) and Sylvan (2016).) Consider the doxastic theory first,
which roughly says that one’s belief that p is based on a reason R just when one has
some meta-belief to the effect that R supports p. This theory predicts that HOD needn't make my belief no longer based on good reasons: if I initially have a meta-belief to the effect that R supports p, I could still hold that meta-belief when gaining HOD, so my belief that p could still count as based on R.
Or consider the causal theory of basing, which says that one’s belief that p is based
on a reason R if this belief is non-deviantly causally sustained by R. Since gaining
HOD needn’t change the causal sustaining chain of my belief that p, if the belief is
originally non-deviantly sustained by R, it could remain so and thus it could still be
based on R.
In sum, gaining HOD doesn’t need to make my belief no longer based on good
reasons. So, the first version of PB fails. Later on, when I refer to PB, I mean the
second version of it, which says that HOD makes my belief no longer properly based.

3.3 What is proper basing?

First, we need to get clear on the notion of proper basing. When one believes that p
based on evidence E, what makes the basing proper?
To understand what proper basing is, a short review of how the term 'proper basing' came into view is in order. Originally, the orthodox schema about how doxastic justification relates to propositional justification was framed as the claim that one has the former when one's belief is based on the latter. (Even where there is occasional mention of 'proper basing,' when talking about what turns propositional justification into doxastic justification, the basing relation is often emphasized while the term 'proper' is often ignored. Again, see Turri 2010, pp. 2–5 for helpful references.) The
additional requirement of proper basing is introduced or emphasized by proponents of
the schema mainly as a reaction to Turri (2010). In that paper, Turri argues against the
orthodox schema by raising some counterexamples. In the first example, a detective
comes to believe that John is the murderer by inferring it from a body of excellent
evidence (fingerprints, witnesses’ testimonies, etc.). So, his belief is based on the
evidence. However, he only infers that John is the murderer from his evidence because
he consults some tealeaf reading that says that the evidence makes it highly likely
that John is the murderer. Clearly, the detective’s belief is not doxastically justified,
although it’s indeed based on the propositional justifiers. In the second counterexample,
the subject comes to believe that (P3) <The Spurs will win> by inferring it from his
knowledge that (P1) <The Spurs will win if they play the Pistons> and knowledge that
(P2) <The Spurs play the Pistons>. So, the subject’s belief P3 is indeed based on his
propositional justifiers. However, the subject infers P3 not by doing a modus ponens
but by relying on this rule: for any proposition r, P1 and P2 imply r. Again, the
subject’s belief is doxastically unjustified even though it’s based on his propositional
justifiers.


In response to these counterexamples to the original formulation of the orthodox schema, some defenders of the schema draw the lesson that doxastic justification requires not just basing, but basing in a proper way (thus the term 'proper basing'), on the propositional justifiers. (See, for instance, Smithies (2015, p. 2782) and Silva (2015, p. 954).) Intuitively, the subjects' beliefs in Turri's examples are not properly based on the propositional justifiers. Moreover, these defenders think that Turri's examples suggest a natural understanding of what proper basing is when one's belief is inferential. In the examples, each subject's belief is based on his evidence because it's inferred or reasoned from the evidence, but the basing process is improper because the reasoning is bad. So, whenever one's basing involves reasoning, the basing is proper when the reasoning is good. (A suggestion of this view of proper basing can be found in Smithies (2015, p. 2782) and van Wietmarschen (2013, pp. 414–415).)
Now, we have some understanding of what proper basing is for an inferential belief. What about non-inferential beliefs? No explicit answers have been offered by those defenders of the orthodox schema, partly because Turri's counterexamples are all about inferential beliefs as they stand. However, it seems that a causal theory of basing comes in handy here. We can say that the proper basing of non-inferential beliefs involves some kind of appropriate causation, 'appropriate' in a broadly reliabilist sense.10
In sum, proper basing can be understood in the following way. When a belief p
is based on something E, the basing is a process transitioning from the mental state
representing or involving E to the mental state of believing that p. If the transition
is inferential, then the basing is proper when the reasoning involved is good. If the
transition is non-inferential, then the basing is proper when the relevant causal process
is reliable. Given that this sounds like a good approximation of what proper basing is, let's assume that it's along the right lines and see whether PB would work.

3.4 Difficulties for PB

PB faces an immediate difficulty in accounting for how HOD defeats the doxastic justification of a non-inferential belief. According to the above understanding of proper basing, a non-inferential belief that p is properly based on the basis E when the belief is caused by E and the causal process is reliable. But as we have discussed in Sect. 3.1, gaining HOD is compatible with the availability of a reliable process. That is, gaining evidence of unreliability doesn't imply that any available process that would lead one to retain the belief would be unreliable. But if a reliable process is available, there is simply no reason to think that the process couldn't operate in the presence of HOD, and thus there is no reason to think that one's belief couldn't still be properly based.

10 This understanding can accommodate the proper basing of the evil-demon victims’ non-inferential
beliefs, if we accept some reliabilist explanation of how these victims’ beliefs are in fact reliable. For
example, we can follow Goldman by claiming that the victims’ beliefs are properly based because they are
actually or normally reliable.
Moreover, even if you have doubts about any such explanation and thus have doubts about whether
reliability is necessary for proper-basing of non-inferential beliefs, presumably you can still accept that
reliability is sufficient. This acceptance is enough for my purpose. For my criticism of PB in Sect. 3.4 is that
one’s non-inferential belief can still be properly based when gaining HOD because it can still be produced
by a reliable process.


For example,
suppose that my belief that the president is in New York is produced by my 100%
reliable belief-forming process of clairvoyance. Suppose that I gain a misleading HOD
that I am affected by a drug that causes my clairvoyance to be unreliable. Since the
HOD is misleading, the clairvoyance process can continue to operate and thus can still
produce the belief that the president is in New York. When this process continues to
produce the belief, the belief will still be properly based.
So, PB faces difficulties in explaining the higher-order defeat of non-inferential beliefs. In what follows, I argue that this difficulty also causes a problem in cases of inferential beliefs. Part of my argument is that the PB theorist has to claim that the higher-order defeat of inferential beliefs traces back to the higher-order defeat of non-inferential beliefs. So, the difficulty in accounting for the latter implies a difficulty in accounting for the former.
First, let’s see how a PB theorist might explain why HOD makes an inferential
belief improperly based. In a recent paper, van Wietmarschen (2013, pp. 414–415)
argues that one’s belief cannot be a result of good reasoning when gaining HOD.
(Smithies 2015, p. 2787 offers a similar explanation.) This is because when one gains
HOD, the reasoning behind one’s maintaining the original belief involves dismissing
the HOD as misleading. Since such dismissal is unjustified, a reasoning that involves
such dismissal will be bad reasoning. Take Drug as an example. The reasoning that
leads one to retain the belief that p must involve the belief that one is not drugged. But
the belief that one is not drugged is unjustified due to the HOD that strongly supports
that one is drugged. In short, the reasoning in Drug becomes bad because it must take
the form ‘E; and I am not drugged; therefore, p’ and the second premise is unjustified.
Although this explanation sounds appealing, it won’t work. In Drug, if ‘E; and I
am not drugged; therefore, p’ is a form that my reasoning to p must take in order for
me to keep my belief, then it’s true that my belief that p cannot be properly based. But
the problem is that this is not a form my reasoning must take, because my reasoning
doesn’t need to involve the belief that I am not drugged. To see this, think about what
reasoning from one’s evidence E to p amounts to: it’s to believe that p because one
takes E to support p. No step in the reasoning has to mention anything about whether
one is drugged. If E is my evidence (and this is not changed by the drug evidence),
my reasoning to p from E will at most involve a belief that E supports p. I don’t need
to form any belief about whether I am drugged.11 If I believe that E supports p and
for this reason I believe that p, my belief that p is still a result of reasoning from E.
Perhaps my actual psychology is that when I believe that E supports p I would also
believe that I am not drugged. But the point is that it’s only the former belief that
must figure in my reasoning to p. The latter belief, justified or not, is not part of my
reasoning and therefore doesn’t make my reasoning to p bad.12
11 Note that here I am interpreting ‘taking E to support p’ as a belief that E supports p, in order to be most
charitable to the PB theorist. If the taking is a belief, at least it initially sounds plausible to say that the belief
is problematic because it is rendered unjustified by the drug evidence. But if the taking is not a belief but
some state like a disposition to believe p given E, then it’s unclear how it could be rendered problematic
by the drug evidence—the most natural way to problematize a disposition is to render it unreliable and the
drug evidence won’t make the disposition unreliable.
12 I am grateful to an anonymous referee for bringing up this possible answer: perhaps the PB theorist could say that even if my reasoning doesn't involve the belief that I am not drugged, it still presupposes this belief in the sense that the reasoning is incompatible with believing that I am drugged. My response is that it's hard to see how the drug belief is incompatible with the reasoning. It seems that for a piece of reasoning to be good reasoning, it's sufficient that all of its premises are justified (which I argue is possible in the later discussion) and that one infers the conclusion by following a good rule. Perhaps the belief that one is drugged can induce some higher-order doubt about whether one's reasoning is good. But it seems that one could carry out a perfect piece of reasoning while doubting whether that reasoning is good.

Now, a PB theorist will push back in this way: it doesn't matter whether what figures in my reasoning is the belief that I am not drugged or a belief about the evidential support. For even if it's the latter, my reasoning will still be bad, because my belief about the evidential support will also be unjustified due to the drug evidence.
If a PB theorist makes this move, he is essentially saying that HOD defeats one’s
first-order belief by defeating the second-order belief about evidential support. This
move faces two severe problems.
First, it only applies to a limited range of higher-order defeat. Recall that HOD is
evidence of unreliability, namely, evidence that one is unlikely to reach a true belief
in the circumstance. It's true that, in some cases such as Drug, such unreliability is generated by unreliability in assessing the evidential connection. But as I have discussed in Sect. 2, the unreliability could also be generated by unreliability in collecting or identifying evidence. If so, we can imagine a variation of Drug in which the drug only affects one's ability to collect or identify evidence and leaves one's ability to assess evidence intact. In such a case, if one initially held a justified belief about the evidential connection, one still does so when acquiring the evidence of being drugged. So, PB couldn't account for higher-order defeat in such cases. This is the first problem with the PB theorist's move that HOD defeats the first-order belief by defeating the second-order belief about evidential support.
Second, even in those cases where the unreliability in question involves unreliability
in assessing evidence, the PB theorist still faces a severe difficulty. In these cases,
it’s presumably true that one’s second-order belief about the evidential connection is
unjustified. So, the PB theorist can indeed tell a plausible story about why the higher-
order defeat happens at the level of one’s first-order belief. However, the PB theorist
will run into a problem in explaining why the defeat also happens at the second-order
level. Let me explain.
The first thing to note is that, when one's belief about the evidential connection is defeated because one gains evidence that one is unreliable in assessing evidence, the defeat happening to one's second-order belief would also be an instance of higher-order defeat. For evidence of unreliability in assessing whether E supports p is HOD about whether E supports p, in the same sense in which evidence of unreliability in assessing whether p is true is HOD about p. So, a PB theorist should explain how the defeat happens at the second-order level, the level involving the belief 'E supports p.'
How would a PB theorist explain the higher-order defeat at the second-order level?
For the sake of consistency, he has to say that the second-order belief is defeated
because it’s rendered improperly based by the HOD. So, the upshot is this: the PB
theorist has to trace the improper basing of one’s first-order belief that p to the improper
basing of one’s second-order belief that E supports p.



But this would lead to a regress. For if we ask how my second-order belief is rendered
improperly based by the HOD, the PB theorist will have to appeal to the improper
basing of third-order beliefs about evidential support, beliefs such as ‘E* supports ‘E
supports p’.’ The regress would be unproblematic if it could go on forever, that is, if we
could have increasingly higher-order beliefs about evidential support. But the regress
couldn’t go on forever. Eventually we will reach a belief about evidential support that
is not inferred from further evidence (call these 'basic evidential beliefs'). Consider beliefs of the form "'if P then Q' and 'P' support 'Q'" or of the form "that human beings have observed P every day in the past supports that P will be true tomorrow."
It’s a familiar point that justifiably holding these basic beliefs cannot require that these
beliefs be inferential, since any inference from the putative further evidence to those
beliefs would be circular. If these beliefs are non-inferential, then as we have discussed
in Sect. 3.3, the proper basing of such beliefs might only involve reliable causation.
But as we have seen in Sect. 3.1, gaining HOD doesn’t need to make a reliable process
unavailable. In fact, it’s hard to see how the HOD in Drug or Hypoxia would make
one unreliable in forming such basic evidential beliefs. Evidence that I am drugged so that my ability to solve a logical puzzle is damaged need not be evidence that I am unable to judge things as basic as whether 'P' and 'if P then Q' support Q.13
Now, let’s review the process of how the above two problems arises for PB to
account for higher-order defeat in an inferential belief. Suppose my belief that p is a
result of inference from evidence E. The PB theorist claims that HOD defeats my belief
that p by making it improperly based. Since proper basing in inferential beliefs can be
understood as good reasoning, he will have to say that the HOD ruins my reasoning
from E to p. But given that I still have evidence E when acquiring HOD, my reasoning
from E to p would be good if I believe that p because I justifiably hold the second-order
belief that E supports p. So, the PB theorist will have to say that the HOD makes the
second-order belief unjustified. This suggestion faces two severe problems. First, it
doesn’t apply when the HOD is not about one’s unreliability in evidence-assessment
but about one’s unreliability in evidence-collection. Second, in explaining why the
higher-order defeat happens at the second-order level, the PB theorist has to trace the
improper basing to the improper basing of one’s basic evidential beliefs, but it’s hard
to see how such basic beliefs are rendered improperly based by the HOD in typical
cases of higher-order defeat.
To conclude, it’s implausible to say that HOD defeats one’s doxastic justification by
making one’s belief improperly based. Assuming the schema that doxastic justification
is proper basing on propositional justifiers, we should reconsider the proposal that
HOD defeats doxastic justification by defeating propositional justification. In the next
section, I cash out this proposal by arguing that, unlike traditional evidentialist or
reliabilist theories of propositional justification, a responsibilist theory of propositional
justification can nicely account for higher-order defeat.

13 To further support this point, consider a prominent theory about what these basic evidential beliefs are
based on. According to Boghossian (2014), my basic belief that ‘P’ and ‘if P then Q’ supports ‘Q’ is based
on my grasp of the meaning of the term 'if.' Since evidence that I am drugged so that I am not able to solve the logical puzzle need not be evidence that I don't grasp the meaning of the term 'if,' the drug evidence need not be evidence that the above basic belief of mine couldn't be properly based.


4 The responsibility proposal

This is my explanation of why HOD defeats doxastic justification: when one gains
HOD, one no longer has propositional justification to hold one’s belief because there
is in principle no responsible way to maintain one’s belief in the presence of HOD.
In what follows, I will first explain what the responsibility condition means; then I
will explain how this condition can be used to explain higher-order defeat. If I am
successful, this will lend strong support to the view that doxastic justification requires responsibility, a view that is regaining traction in the recent development of theories of justification (see Peels 2016a, b).

4.1 What is responsibility?

As a warm-up to understanding the responsibility condition, let me note that there is a familiar kind of counterexample to evidentialism that resembles cases involving higher-order defeat in the following two respects: the subject's belief is properly based on evidence that supports the proposition believed, and yet intuitively the belief is unjustified. These are the so-called cases of 'defective inquiry.' Consider this case from Baehr (2009, p. 547):
Defective inquiry
George represents the epitome of intellectual laziness and obliviousness. He goes
about his daily routine focusing only on the most immediate and practical of
concerns. He lacks any natural curiosity. Unsurprisingly, he holds many beliefs
he shouldn’t. Among them is the belief that exposure to second-hand smoke
is harmless to health. He believes so because he remembers that some years ago a reliable source told him that this is confirmed by a considerable amount
of research. However, this belief is attributable to his intellectual laziness and
obliviousness. Had he been slightly more attentive to the well-publicized research
on the risk of secondhand smoke, he wouldn’t hold this belief.
In this example, George’s belief fits the evidence he has. After all, he has no reason to
doubt the reliability of the source in question or the reliability of his memory. And his
belief is properly based on his memory. But it seems that he shouldn’t hold the belief.
If justified beliefs are those one should have, or at least is permitted to have, then it seems that George's belief is unjustified.14

14 On whether this deontological talk of justification is appropriate in the context of attacking evidentialism,
see Baehr 2009, fn. 13.


What makes George’s belief unjustified? A natural answer is that it’s unjusti-
fied because it’s a result of intellectual irresponsibility. That is, George has certain
intellectual obligations when it comes to belief-formation, and he doesn’t do what’s
reasonably expected to ensure that he meets those obligations. Just like a father
is responsible in handling a situation involving his children when he does what
he is obligated to do with regards to his children, and a judge is responsible
in ruling a case when he does what he is obligated to do with regard to the
ruling, a person is responsible in holding a belief when he does what he is obligated
to do with regard to the belief.
What are these obligations that generate responsibility? The answer depends on
where obligations come from. I will not discuss the issue in detail here, but it’s
plausible to claim that an agent’s obligations sometimes come from the very role the
agent plays (Feldman 2000, p. 676). So, a father has the obligation to do what he’s
reasonably expected to ensure that his children have a good life, because it's constitutive
of being a father that a father has this obligation; a judge has the obligation to do what’s
reasonably expected to ensure that justice is served, because it’s constitutive of being
a judge that a judge has this obligation; and a believer has the obligation to do what he
is reasonably expected to ensure that he holds a true belief, because it’s constitutive
of being a believer that a believer has this obligation. So, in Defective Inquiry, George
is irresponsible in his belief-formation: as a normal adult living in the twenty-first
century, he is expected to be aware of the well-publicized evidence supporting the
harmfulness of second-hand smoking. His ignorance of the evidence shows that he
doesn’t do what’s reasonably expected to ensure that he holds a true belief about the
issue.
What emerges from the above discussion is the following characterization of respon-
sibility:
(R) One is responsible in holding a belief that p just in case, in forming and
maintaining the belief, one does what’s reasonably expected to ensure that one
believes that p if and only if p is true.
I will leave it vague as to what counts as ‘reasonable expectation’ except to say that
it depends on the agent’s evidential situation. This is appropriate, considering that
responsibility in other areas is also not a clear-cut notion. When we claim that a judge
is responsible in ruling a case when he does what’s reasonably expected to ensure
that justice is served, the claim is well understood even if the notion of reasonable
expectation is also left vague. Similarly, our intuition in Defective Inquiry is clear that
George doesn’t do what’s reasonably expected to ensure that he holds a true belief
about whether second-hand smoking is harmful, even though the notion of reasonable
expectation is vague.
The idea that justification requires the kind of responsibility captured in (R) is not new. It's suggested in Chisholm (1977), Kornblith (1983) and Wedgwood (1999). The idea has recently been defended in a series of works by Peels (2016a, b), who argues, convincingly in my opinion, that the responsibility condition is underappreciated in the literature on theories of justification.15 I won't repeat their arguments here. Rather, I will defend this idea by explaining its power in handling higher-order defeat.16

4.2 How the responsibility condition accounts for higher-order defeat

Consider this common case of moral irresponsibility. A judge is about to be assigned to a case. But then he gets strong (although misleading) evidence that the defendant in the case is an old lover who cheated on him. Given his conflicted feelings toward the lover, there is no way to predict how he will perform in the ruling. Perhaps his residual feelings will lead him to rule in her favor, or perhaps his hatred will lead him to rule against her. Despite the conflicted feelings, the judge refrains from recusing himself.
The judge in this example is being morally irresponsible. Given the evidence, he
should think that it’s not very likely for him to rule justly. If so, our reasonable expec-
tation is that he recuses himself from the case. So, when he refrains from recusing
himself, he is not doing what’s reasonably expected to ensure that justice is served.
Intellectual responsibility works in the same way. When you get evidence that you
are not very likely to reach a true belief in the situation, deciding to keep your belief
is intellectually irresponsible. You would not be doing what’s reasonably expected to
ensure that you hold the belief if and only if it’s true. In fact, as long as the HOD is
present, there is no responsible way for you, or for any person in your current evidential
situation, to maintain the belief at the moment. This is why HOD makes you no longer
have propositional justification to hold your belief. So, here is my ‘responsibility
proposal’:
(PJR) S is propositionally justified at t in believing that p only if, in principle,
there is a responsible way for a person in S’s evidential situation at t to hold
the belief that p.
Two notes about PJR are in order.
First, PJR is about what propositional justification requires, and thus it's more specific than the above-mentioned idea that justification requires responsibility. That idea merely says that doxastic justification requires that one's belief is actually responsibly held; it leaves open whether responsibility is a requirement of propositional justification or a requirement of proper basing.
15 I should mention that Conee and Feldman (2004, p. 233) reject the idea that justification requires
responsibility in their response to cases like Defective Inquiry. They distinguish ‘current-state justification’
from ‘methodological justification.’ The former is about what to believe given one’s current evidence; the
latter is about what method to adopt given one’s goals (which might include one’s cognitive goals). So,
George’s belief is justified in the former sense, although it’s unjustified in the latter sense since he has used a
bad method in the past. Moreover, in an earlier paper (Feldman and Conee 1985, pp. 21–23), they distinguish epistemic justification from prudential justification to handle cases like Defective Inquiry, and they
claim that George’s belief has the former although it lacks the latter.
My response is that, even if their distinctions can handle cases like Defective Inquiry, it cannot explain
higher-order defeat. Gaining HOD doesn’t imply that one has used a bad method in the past. (In some
cases, HOD can be evidence that one has used a bad method in the past, but this evidence can be misleading.)
Besides, intuitively, HOD does make one’s belief epistemically unjustified.
16 Note that the requirement of responsibility is separate from requirement of proper basing. That one’s
belief is irresponsibly formed doesn’t mean it’s improperly based. In Defective Inquiry, George’s belief is
irresponsibly formed since he is lazy in collecting evidence. However, his belief is still a result of good
reasoning from his evidence, so it’s still properly based on his evidence.

Second, as a requirement on propositional justification, PJR requires only that
there is in principle a responsible way to hold the belief; it doesn’t require that there
is a responsible way for the subject in question. And in determining whether there is a
responsible way in principle, we should not factor in the subject’s cognitive limitations
but should consider whether a somewhat idealized person can responsibly hold the
belief in the subject’s evidential situation. So, when S has evidence that P and also
evidence that P entails Q but is so dead drunk to responsibly coming to believe Q by
performing the simple inference of modus ponens, S still has propositional justification
for believing Q because a person with better cognitive capacity than S at the moment
can responsibly come to believe Q by performing the inference.17
I hope that these notes make PJR clearer. In what follows, I explain PJR in greater detail and argue that it has various important virtues in dealing with the phenomenon of higher-order defeat.

4.2.1 Accounting for how pure HOD defeats justification

Note that the judge in the above example doesn’t have evidence that he will be likely
to rule unjustly; he only has evidence that it’s not very likely that he will rule justly.
And this evidence is enough to make it irresponsible for him not to recuse himself.
Similarly, for HOD to imply irresponsibility in retaining belief, it doesn’t need to be
evidence of anti-reliability (i.e. evidence that you are likely to reach a false belief).
It only needs to be evidence of unreliability (i.e. evidence that you are not likely to
reach a true belief). Therefore, PJR can nicely account for how pure HOD defeats
justification.

4.2.2 Accounting for all cases of higher-order defeat

Unlike PB, PJR applies to all cases of higher-order defeat. All cases of higher-order
defeat involve evidence of unreliability in reaching a true belief. No matter whether
the unreliability in question is unreliability in assessing evidence or unreliability in
other respects like collecting or identifying evidence, when you acquire evidence of
unreliability, maintaining your belief is not doing what’s reasonably expected to ensure
that you hold a true belief.
Note that, for HOD to defeat justification, it doesn’t matter whether you form the
higher-order belief that you are unreliable. For instance, in Drug, even if the student
stubbornly believes that he is not being drugged, it’s still irresponsible for him to retain
his first-order belief about the solution to the logical puzzle. If he stubbornly believes
that he is not being drugged despite the clear evidence suggesting otherwise, he is
being irresponsible in holding the higher-order belief (the belief that he is not being
drugged). This irresponsibility in holding the higher-order belief will trickle down to his first-order belief, since it's clear that if the former is unlikely to be true then the latter would also be unlikely to be true.

17 I am grateful to an anonymous referee for the discussion here.


I should note that, although irresponsibility in one's higher-order beliefs can trickle down to one's first-order belief, irresponsibility in one's first-order belief doesn't always need to be traced back to irresponsibility in one's higher-order beliefs. In particular, it doesn't need to be traced back to irresponsibility in beliefs of the form 'E supports p.' Recall Defective Inquiry. George might justifiably believe (and thus also responsibly believe) that his memory evidence supports that second-hand smoke poses no risk to health. And yet he is still irresponsible in holding the first-order belief. So, unlike PB, PJR won't lead to a regress.

4.2.3 Accounting for the particular defeating effect of HOD

PJR not only explains how HOD defeats justification but also captures the particular
defeating effect that defenders of higher-order defeat have in mind. In my discussion of
PB, I have mentioned that HOD is typically taken to require one to suspend judgment,
not just to hold the belief in a new way. This defeating effect is captured by PJR
because, when you have HOD, there is simply no way for you or any other person in
your evidential situation to responsibly hold the belief. As long as you have the HOD,
it will be irresponsible to maintain the belief, no matter how you revise the basing
process of the belief. Of course, there will be a responsible way to hold the belief
if you check further and find that the HOD is misleading (for example, if you
collect further evidence and discover that you are not actually drugged). But this would
place you in a different evidential situation. What matters is that, given your current
evidential situation, there is in principle no way to responsibly hold the belief.
To further see how PJR accounts for the particular defeating effect that proponents
of higher-order defeat have in mind, it’s helpful to consider what happens when you
lack HOD and yet out of paranoia you form the higher-order belief that you are
unlikely to reach a true belief about p. In such a case, this paranoid belief would also
make it irresponsible for you to maintain the belief that p. (That is, although believing
that you are not being drugged when you have the drug evidence wouldn’t save you
from being irresponsible, believing that you are drugged without the relevant drug
evidence would make you irresponsible.) However, this paranoia doesn't imply that
you no longer have propositional justification for your belief. This is because, given
your evidential situation, there is in principle a responsible way to hold the belief, a
way that is available to another person who doesn't share your paranoid higher-order
belief. So, what you should do is give up the paranoid belief that you are drugged, not
your original first-order belief.

4.2.4 Accounting for why evidence of irrationality that is not evidence of unreliability lacks defeating power

In Sect. 2, I have argued that we should characterize HOD as evidence of unreliability
rather than evidence of irrationality. This claim is nicely vindicated by PJR, for PJR
implies that evidence of irrationality has no defeating power when it’s not also evidence
of unreliability. When evidence of irrationality is not also evidence of unreliability,
it doesn’t make it irresponsible for one to hold the belief. This is because, as I’ve
said, responsibility is tied to obligations. We have responsibilities to do what we
are obligated to do. And our most fundamental intellectual obligation as believers
is to make sure that we hold true beliefs. We might have further obligations, such as
obligations to make sure that our beliefs are rational or amount to knowledge, but these
obligations derive from the obligation to believe the truth.18 So, we don't have a
responsibility to ensure that our beliefs are rational over and above the responsibility to
ensure that our beliefs are true. Therefore, we cannot be accused of being irresponsible
if we maintain a belief in the presence of evidence of irrationality when that evidence
is not also evidence of unreliability.

4.2.5 Dissolving a putative dilemma generated by higher-order defeat

The last but not least important virtue of PJR is that it can dissolve a putative
dilemma generated by higher-order defeat (or by the level-bridging principle). The
dilemma arises in cases where one's total evidence still supports the believed
proposition even after one gains HOD. In those cases, one is required to keep one's belief,
since one is required to believe whatever is supported by one's evidence, but one is also
required to suspend judgment given the HOD (otherwise one would violate the level-
bridging principle).
Thus far, reactions to the dilemma include: to deny higher-order defeat and perhaps
also the level-bridging principle (Lasonen-Aarnio 2014)19 ; to claim that we are not
required to believe what’s supported by evidence at all (Worsnip 2018; Littlejohn
2015); and to simply accept the dilemma (Christensen 2007a, 2010). All options have
serious costs.
PJR can dissolve the dilemma without incurring any of these costs. It claims that,
although we normally have an obligation to believe whatever is supported by our evi-
dence, we don't have such an obligation when there is HOD. Consider this question:
why would we think that we have an obligation to believe whatever is supported by
evidence? A plausible answer is that believing according to one's evidence is normally
the best way to try to fulfill our intellectual responsibility, namely, to ensure that we hold
true beliefs. But when we acquire HOD, this is no longer the best way to try to fulfill
our responsibility. Instead, we would violate our responsibility with regard to truth if
we maintained a belief that is in fact supported by our evidence. So, it's no wonder that
evidential support loses its normative power when we have HOD.
The key to this dissolution of the dilemma is to recognize that we have no
fundamental intellectual obligation to believe whatever is supported by evidence,
and no fundamental intellectual obligation to respect HOD (or the level-
bridging principle). The only fundamental intellectual obligation we have is to do
what can reasonably be expected to ensure that we hold true beliefs. So, when the norm of
believing what's supported by evidence gives a verdict inconsistent with the one given

18 One reason to think so is that we can easily explain where the fundamental obligation to believe the
truth comes from: it comes from the constitutive truth aim of belief. But if we claim that believers also
have a fundamental obligation to make sure that their beliefs are rational or amount to knowledge, it's hard
to give a similar story: a belief doesn't have a constitutive aim to be rational or to be knowledge. See
Wedgwood (2002) for more discussion of the truth aim of belief.
19 Titelbaum’s (2015) position is subtler: he denies higher-order defeat but still maintains the level-bridging
principle.


by our fundamental norm, the norm's normative power will be overridden. The same
goes for the norm of respecting HOD. In our actual world, respecting HOD is
largely helpful in fulfilling our fundamental obligation. But if you were to live in
a world where respecting HOD systematically led you to miss out on
true beliefs (because, unlike in our world, HOD in that world is largely misleading), then
the norm of respecting HOD would not help fulfill our fundamental obligation and
thus would lose its normative power.
To sum up, PJR has various benefits: it covers all cases of higher-order defeat; it
avoids the problems that PB faces; it captures the distinctive defeating power of HOD;
it explains why evidence of irrationality that is not also evidence of unreliability
has no defeating power; and it dissolves an important dilemma generated by higher-
order defeat. Moreover, PJR is not ad hoc. As mentioned above, PJR is in line with
the idea that justification requires responsibility, an idea that has a prominent history
and has received serious defense recently. So, we should take seriously the idea
that propositional justification (and thus doxastic justification) requires intellectual
responsibility.

5 Conclusion

In this paper, I have argued against the proposal that HOD defeats doxastic justification
by making one's belief improperly based, and I have argued that it's better to think that
HOD defeats doxastic justification by defeating propositional justification. I cash out
this idea by explaining how treating the existence of a responsible way to hold one's
belief as a requirement on propositional justification can nicely account for higher-
order defeat. My arguments, if successful, would lend strong support to the view
that responsibility is required for propositional justification and thus for
doxastic justification. After all, the issue of how the doxastic justification of a belief
is defeated is closely connected to the issue of what doxastic justification involves.
So, a theory of justification that can nicely explain defeat is preferable to a theory that
cannot. And insofar as responsibility is closely connected to deontological theories
and virtue theories of justification, this paper also supports thinking of justification in
deontological or virtue-theoretic terms.20

References
Baehr, J. (2009). Evidentialism, vice, and virtue. Philosophy and Phenomenological Research, 78(3),
545–567.
Boghossian, P. (2014). What is inference? Philosophical Studies, 169(1), 1–18.
Broome, J. (2013). Rationality through reasoning. Oxford: Blackwell.
Chisholm, R. M. (1977). Theory of knowledge (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Christensen, D. (2007a). Does Murphy's law apply in epistemology? Self-doubt and rational ideals. Oxford
Studies in Epistemology, 2, 3–31.

20 For comments and discussion, I am grateful to Sophie Horowitz, David Christensen, Nico Silins, Jin
Zeng, Lu Teng, Matt Lutz, and three anonymous referees.


Christensen, D. (2007b). Epistemology of disagreement: the good news. Philosophical Review, 116,
187–217.
Christensen, D. (2010). Higher-order evidence. Philosophy and Phenomenological Research, 81(1),
185–215.
Christensen, D. (2016). Disagreement, drugs, etc: from accuracy to akrasia. Episteme, 13(4), 397–422.
Coates, A. (2012). Rational epistemic akrasia. American Philosophical Quarterly, 49, 113–124.
Conee, E., & Feldman, R. (2004). Evidentialism. New York, NY: Oxford University Press.
Elga, A. (2013). The puzzle of the unmarked clock and the new rational reflection principle. Philosophical
Studies, 164(1), 127–139.
Feldman, R. (2000). The ethics of belief. Philosophy and Phenomenological Research, 60(3), 667–695.
Feldman, R. (2005). Respecting the evidence. Philosophical Perspectives, 19, 95–119.
Feldman, R., & Conee, E. (1985). Evidentialism. Philosophical Studies, 48(1), 15–34.
Foley, R. (1990). Fumerton’s puzzle. Journal of Philosophical Research, 15, 109–113.
Foley, R. (2001). The foundational role of epistemology in a general theory of rationality. In A. Fairweather &
L. Zagzebski (Eds.), Virtue epistemology: Essays on epistemic virtue and responsibility (pp. 214–231).
Oxford: Oxford University Press.
Goldman, A. I. (1979). What is justified belief? In G. S. Pappas (Ed.), Justification and knowledge (pp. 1–25).
Dordrecht: Reidel.
Greco, D. (2014). A puzzle about epistemic akrasia. Philosophical Studies, 167, 201–219.
Horowitz, S. (2014). Epistemic akrasia. Noûs, 48, 718–744.
Huemer, M. (2011). The puzzle of metacoherence. Philosophy and Phenomenological Research, 82(1),
1–21.
Ichikawa, J., & Jarvis, B. (2013). The rules of thought. Oxford: Oxford University Press.
Kelly, T. (2010). Peer disagreement and higher order evidence. In A. I. Goldman & D. Whitcomb (Eds.),
Social epistemology: Essential readings (pp. 183–217). Oxford: Oxford University Press.
Korcz, K. A. (2010). The epistemic basing relation. In E. N. Zalta (Ed.), The Stanford encyclopedia of
philosophy (Spring 2010 Edition). http://plato.stanford.edu/archives/spr2010/entries/basing-epistemic/.
Kornblith, H. (1983). Justified belief and epistemically responsible action. Philosophical Review, 92(1),
33–48.
Lasonen-Aarnio, M. (2014). Higher-order evidence and the limits of defeat. Philosophy and Phenomeno-
logical Research, 88(2), 314–345.
Littlejohn, C. (2015). Stop making sense? On a puzzle about rationality. Philosophy and Phenomenological
Research. https://doi.org/10.1111/phpr.12271.
Peels, R. (2016a). Responsible belief: An essay at the intersection of ethics and epistemology. New York,
NY: Oxford University Press.
Peels, R. (2016b). Responsible belief and epistemic justification. Synthese. https://doi.org/10.1007/s11229-016-1038-8.
Pollock, J., & Cruz, J. (1999). Contemporary theories of knowledge (2nd ed.). Totowa, NJ: Rowman &
Littlefield.
Silva, P., Jr. (2015). On doxastic justification and properly basing one’s beliefs. Erkenntnis, 80(5), 945–955.
Smithies, D. (2012). Moore’s paradox and the accessibility of justification. Philosophy and Phenomeno-
logical Research, 85, 273–300.
Smithies, D. (2015). Ideal rationality and logical omniscience. Synthese, 192(9), 2769–2793.
Sylvan, K. (2016). Epistemic reasons II: Basing. Philosophy Compass, 11(7), 377–389.
Titelbaum, M. (2015). Rationality’s fixed point (or: In defense of right reason). In J. Hawthorne & T. Gendler
(Eds.), Oxford studies in epistemology (Vol. 5, pp. 253–294). Oxford: Oxford University Press.
Turri, J. (2010). On the relationship between propositional and doxastic justification. Philosophy and Phe-
nomenological Research, 80(2), 312–326.
van Wietmarschen, H. (2013). Peer disagreement, evidence, and well-groundedness. Philosophical Review,
122(3), 395–425.
Wedgwood, R. (1999). The a priori rules of rationality. Philosophy and Phenomenological Research,
59(1), 113–131.
Wedgwood, R. (2002). The aim of belief. Philosophical Perspectives, 36(s16), 267–297.
Williamson, T. (2011). Improbable knowing. In T. Dougherty (Ed.), Evidentialism and its discontents
(pp. 147–164). Oxford: Oxford University Press.


Worsnip, A. (2018). The conflict of evidence and coherence. Philosophy and Phenomenological Research,
96(1), 3–44.
Ye, R. (2015). Fumerton’s puzzle for theories of rationality. Australasian Journal of Philosophy, 93(1),
93–108.
