Philosophy Compass (2012) 7/9: 585-596
FALLIBILISM
Baron Reed
Northwestern University
[email protected]
Many of the central figures in the history of epistemology have held that
knowledge must be grounded in foundations that are infallible.
Although these foundations have been characterized in a variety of
ways, in each case it has been agreed that they are infallible in that they
preclude error on the part of the person who has them.1 The doctrine of
fallibilism stands in opposition to this picture. Roughly stated, the basic
idea is that the subject can know something even though it could have
been false. This is not the same as saying that the subject can know
something that is false—it is very widely accepted by philosophers that,
if a belief counts as knowledge, it is true. Rather, the claim is that a
belief held with a particular epistemic grounding can be knowledge even
though the subject could have held that belief with the same grounding in
circumstances where the belief is false (and, of course, in those
circumstances the belief would not count as knowledge).2 To put the
claim a different way, the epistemic grounding that is sufficient for
knowledge does not preclude the possibility of error.
1. THE MOTIVATION FOR FALLIBILISM
While infallibilist views have been prominent historically, it is now
fallibilism that enjoys widespread popularity. There have been two
motivations leading to this change. First, one of the most common
responses to skeptical arguments has been to treat them as depending
on a stringent conception of knowledge.3 Because our cognitive faculties
are imperfect, the thought goes, there is no way we could meet the
demands of such a conception of knowledge. But our faculties are still
very good; surely they allow us to achieve a more modest sort of
cognitive success. Fallibilism, then, takes that modest success to be
knowledge.
Second, on the assumption that we do have the variety and amount
of knowledge that we think we have, infallibilism does not seem to
model it very well.4 For example, one of the things that I currently take
myself to know is the fact that the jaguar is the biggest cat in the
Americas. But, if I were asked how that knowledge is grounded in such
a way that it is infallible (say, by relation to the given element in
experience), I would be utterly at a loss.5 Not only do I not remember
the way in which I acquired the belief, it also seems to me that it could
be wrong. I don’t think that it is wrong, but I also wouldn’t be terribly
surprised if it were.
Although there are still some defenders of infallibilism, it has
largely been supplanted by fallibilism as the dominant framework in
contemporary epistemology.6 As Stewart Cohen has said, “the
acceptance of fallibilism in epistemology is virtually universal.”7 But
there are many, widely divergent ways in which this framework has been
developed—ranging from externalist views like Alvin Goldman’s
reliabilism to internalist views like Earl Conee’s and Richard Feldman’s
evidentialism.8 For that reason, fallibilism serves both as the doctrine
that unifies contemporary epistemologists and as a primary source of
the most serious problems they still face.
2. A MORE PRECISE ACCOUNT OF FALLIBILISM
Fallibilism has most commonly been formulated in two ways.9
According to the first, a person S knows that p in a fallibilist way just in
case S knows that p on the basis of some justification j and yet S's belief
that p on the basis of j could have been false (or mistaken or in error).10
For example, I know that the Cubs beat the Dodgers the last time they
played; I know it because my brother told me what happened, and he is
usually reliable about this sort of thing. But, if he had misread the box
score in the newspaper, I still would have believed him. In that case, my
belief would have been held with the same justification, but the belief
would have been false. Because this case is possible, the knowledge I
actually have (when my brother did not misread the box score) is
fallible.11
According to the second common formulation of fallibilism, S
knows that p in a fallibilist way just in case S knows that p on the basis
of some justification j and yet j does not entail (or guarantee) that p.12 In
the above example, it is my brother's testimony that enables me to know
that the Cubs beat the Dodgers. But my brother's saying that the Cubs
beat the Dodgers does not entail that this is so. His testimony is
logically consistent with the Cubs having beaten the Dodgers, but it is
also consistent with various other possibilities: the Dodgers may have
beaten the Cubs and yet my brother misread the box score or misspoke
or uncharacteristically told a lie, etc.
These two formulations of fallibilism are equivalent, given the
standard interpretation of entailment. To say that j entails p is to say
that it is impossible for j to be true and p to be false. So, to deny that j
entails p, as the second formulation does, is to say that it is possible for j
to be true and yet p false. And this is just what the first formulation
says: S could have justification j even in cases where it is false that p.
Both of these formulations, however, fall prey to the same problem:
neither is able to allow for fallibilistic knowledge of necessary truths.13
Let us suppose that it is necessarily true that p. In that case, it is
impossible for S to be in a situation where her justification for this belief
is true and yet it is false that p. This will be so for any justification, no
matter how poorly it might otherwise seem to justify belief that p.
Similarly, where it is necessarily true that p, every justification will
entail that p. But this will be so simply because everything entails a
necessary truth.
There are two lessons to be learned from this problem. First,
entailment is not itself an epistemic relation.14 Although it would be
valuable for a subject to know (or grasp or be acquainted with the fact)
that her evidence entailed a proposition, it is not the entailment relation
itself that would allow her to know the entailed proposition. Given that
the entailment relation can hold even in cases where there is no
epistemic link between a subject’s grounds for belief and the belief
itself, this is a lesson that should be accepted by fallibilists and
infallibilists alike.
Second, fallibilism requires a broader formulation: S knows that p in
a fallible way just in case S knows that p on the basis of some
justification j and yet S's belief that p on the basis of j could have failed to
be knowledge. There are two ways in which this might be so. First, the
subject’s belief could have been false. This, of course, has been the point
of focus for those who give the first of the common formulations above.
But, second, the belief could have been true but only by accident.
Suppose, for example, that Julia has walked by the same clock tower
every day for years and has always found it to be accurate. Today, she
sees that it says the time is two o’clock. She comes to believe this, and
her true belief is both justified and an instance of knowledge.
Nevertheless, it could have been the case that the clock had stopped
exactly twelve hours earlier, in which case it would no longer be
generally accurate but still would have, by chance, indicated the correct
time. In that case, Julia would have believed it was two o’clock, and her
belief would have been both true and justified (given her long track
record of deriving knowledge from using the clock). But her belief
would have failed to have been knowledge.15
Notice that the fallible nature of Julia’s knowledge about the time
can be explained in two ways: it could have been accidentally true (as
above), or it could have been false (if, for example, the clock had
stopped ten, rather than twelve, hours before she walked past it). In the
case of knowledge of necessary truths, however, this is not so—they
could not have been false, no matter what the quality of the subject’s
justification might be. Even so, a subject’s belief regarding a necessary
proposition could have been true by accident. For example, a student
may learn many mathematical truths from her teacher, who is generally
a highly reliable source for testimony of this sort. But the teacher,
excellent though she is, could have made two mistakes on a single
problem: she could have transposed two digits once when performing the
calculation and again when writing the answer on the board. In some
cases, these two mistakes will cancel each other out—the number she
mistakenly writes on the board turns out to be the correct answer to the
problem. If this were to happen, the student would have a belief that is
true (indeed, necessarily true) and justified (given the teacher’s general
reliability, the student’s well-placed trust in the teacher, etc.) but not
knowledge. Such cases are rare, no doubt, but their possibility means
that the student’s knowledge, acquired on the basis of her teacher’s
testimony, is fallible.
What makes fallibilistic knowledge possible is the fact that it is
grounded in something other than entailment—something more loosely
connected with the truth of the subject’s belief. This looser grounding is
probability, which permits both of the ways in which a subject’s justified
belief can fail to be knowledge: P can make Q probable even though Q is
false, and P can make Q probable even though Q is true for reasons that
have nothing to do with P (and, in that sense, is accidentally true).16
This permits a second way of thinking about fallibilism, which can
replace the entailment formulation above: S knows that p in a fallible
way just in case S knows that p on the basis of some justification j,
where j makes probable that p.17
There is widespread disagreement over how best to understand
probability in epistemology. Some philosophers have taken epistemic
probability to be a relation holding between propositions, perhaps
knowable a priori.18 Others have taken the relevant sort of probability to
be grounded in relative frequencies or propensities.19 Nevertheless, the
fact that these theories recognizably ground justification and
knowledge in probability marks all of them as versions of fallibilism.
3. THE GETTIER PROBLEM
If philosophers have been pushed to fallibilism by skeptical worries, it is
only fair to note that fallibilism itself has presented a number of
significant epistemological problems. Chief among these is the so-called
Gettier problem, which has to do with accidental truth of the sort
discussed in the previous section.20 In his classic paper, Edmund Gettier
argued that any account of knowledge that (a) allowed the possibility of
justified but false beliefs and (b) allowed justification to be transferred
by deduction would be unable to account for the following sort of case.
Smith has a great deal of evidence for the proposition that Jones owns a
Ford (such as remembering that Jones has always driven a Ford, has
spoken about having to make his car payments, etc.). At the same time,
Smith has no evidence that his friend Brown is traveling, but he knows
that each of the following propositions is entailed by the proposition for
which he has a lot of evidence: that either Jones owns a Ford or Brown
is in Boston, that either Jones owns a Ford or Brown is in Barcelona, and
that either Jones owns a Ford or Brown is in Brest-Litovsk. As it
happens, Jones has just sold his Ford, but Brown really is in Barcelona.
So, while two of his inferred beliefs are justified and false, Smith’s belief
that either Jones owns a Ford or Brown is in Barcelona is both justified
and true. But, surely, it is not knowledge. So-called Gettier cases have
generally been taken to have shown that justified true belief accounts of
knowledge, where the justification in question is fallibilistic, cannot be
correct.21 At a minimum, some other condition must be added to the
account to rule out accidentally true belief.
Various solutions to the Gettier problem have been proposed. They
include requiring that the subject’s justification not rely on any
falsehood;22 requiring that there not be any true proposition which, if
the subject were to believe it, would undermine the subject’s
justification for the belief in question;23 and requiring that there be
some causal or counterfactual relation between the belief and the fact
that the belief makes true.24 There are variations on each of these
strategies for responding to the Gettier problem, and objections and
counterexamples have been raised for each of them. It is safe to say, I
think, that none of these responses has been widely accepted.25
Perhaps as a result, many philosophers have largely set the problem
aside, apparently reasoning that, since everyone faces the Gettier
problem, it doesn’t weigh on them any more heavily than it does on
anyone else.26
Others have become pessimistic about the prospects of finding a
solution at all and have embraced instead views that, they argue, are not
susceptible to problems with accidentality in the first place.27 Finally, it
has been argued that the Gettier problem cannot be solved and that
fallibilism will always bring with it a problematic kind of accidental
success. If this is so, fallibilism may permit an escape from traditional
skepticism only to fall prey to a new type of skeptical problem.28
4. THE LOTTERY PARADOX
In its initial form, the lottery paradox poses a problem for rational
belief.29 It depends on the following two principles:
(1) If it is highly probable that p, then it is rational to
believe that p.30
(2) If it is rational to believe that p and it is rational to
believe that q, then it is rational to believe that p & q.
Each principle is quite plausible on its own. If one’s goal is to have as
many true beliefs as possible while minimizing the risk of having false
beliefs, then one will obviously want to pursue the strategy of believing
highly probable propositions. And, if two propositions are each rational
to believe, then surely it is rational to believe their conjunction. After
all, one can be certain that the conjunction is entailed by what one is
already rational to believe; because one is certain of the entailment, it
seems as though adding the conjunction to one’s set of beliefs does not
bring with it any additional risk of error.
Nevertheless, the two principles together yield a counterintuitive
result. Suppose there is a fair lottery in which 1000 tickets are sold and
in which only one ticket will win. For each ticket, there is thus a .999
probability that it will lose. Principle (1) tells us that it is rational to
believe of each ticket that it will lose. So, where proposition pi is the
proposition that ticket ti will lose, it is rational to believe that p1, that p2,
…, that p1000. Principle (2) tells us that it is rational to believe the
conjunction of all these propositions: that p1 & p2 & … & p1000. But,
because we know it is a fair lottery, it is also rational for us to believe
that some one of the tickets will win—i.e., it is rational for us to believe
that either not-p1 or not-p2, …, or not-p1000. We know (and rationally
believe) that this is equivalent to the proposition that not-(p1 & p2 & …
& p1000). Using principle (2) again, it is rational to believe that p1 & p2 &
… & p1000 & not-(p1 & p2 & … & p1000). But that proposition, of course, is
a contradiction.
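The probabilities at work here can be made vivid with a short calculation (a sketch of my own devising, not part of the original discussion; the helper `p_all_lose` is purely illustrative):

```python
# Illustrative sketch: the probabilities behind the lottery paradox for
# rational belief, using a fair 1000-ticket lottery with exactly one winner.

N = 1000  # tickets sold

# Probability that any particular ticket loses.
p_lose = (N - 1) / N
assert p_lose == 0.999  # highly probable, so principle (1) licenses belief

def p_all_lose(k, n=N):
    """Probability that the first k tickets ALL lose: the winner must
    lie among the remaining n - k tickets."""
    return (n - k) / n

# Each conjunct is highly probable, but the full conjunction is certain
# to be false, since some ticket must win.
print(p_all_lose(1))     # 0.999
print(p_all_lose(500))   # 0.5
print(p_all_lose(1000))  # 0.0 -- yet principle (2) licenses believing it
```

The point the calculation illustrates is that principle (1) looks only at the individual conjuncts, while principle (2) ignores how quickly their joint probability collapses.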
The above result is a paradox (in the strict sense of the term) if one
is also committed—as many philosophers are—to the view that it is
never rational to believe a contradiction.31 Others have argued that the
lesson we should draw from it (and from the preface paradox, as well) is
that it in fact is rational to have a belief set that is inconsistent.32
Whether or not this is a satisfactory solution to the original version of
the lottery paradox, we shall see that it does not work for a version of
the paradox framed in terms of knowledge.
Consider the following epistemic versions of the two principles
underlying the lottery paradox:
(1′) If it is both highly probable and true that p, then
(ceteris paribus) one knows that p.33
(2′) If one knows that p and one knows that q, then one
knows that p & q.34
These principles seem to be just as plausible as the principles governing
rational belief above. More to the point, the fallibilist is in no position to
reject (1′), if my characterization of fallibilism as probabilistic in nature
is correct. When a proposition is highly probable and true, it is hard to
see how the proposition could fail to be known. Such a failure could
happen only in the event that what makes the proposition justified for
the subject is somehow disconnected from what makes it true, as
happens in Gettier cases. But, supposing (in accordance with the ceteris
paribus clause) that there is no such disconnection, the subject will have
fallibilist knowledge when she believes highly probable, true
propositions. Principle (2′) simply allows the subject to conjoin separate
bits of knowledge into one.35 If she knows them separately, it seems
obvious that she will still know them when she has put them together.
Consider, again, a fair lottery with 1000 tickets, of which only one
will be a winner. For each ticket, there is a .999 probability that it will
lose. Principle (1′) says that, where it is true that a ticket ti will lose, I
can know the proposition pi that it will lose. Suppose it is true that t1
will lose. My belief that p1 is then true and highly probable; moreover,
its truth does not seem to be accidental in the way that the justified true
beliefs in Gettier cases are.36 So, it looks as though I know that p1. By
parity of reasoning, the same should hold with respect to the other
tickets in the lottery, with the exception of the ticket tj that loses. In
that case, my belief that pj will be justified but false, and therefore it will
fail to be knowledge. Principle (2′) will allow me to conjoin all of my
separate bits of knowledge, but the conjunction I know to be true will
not include the proposition that pj. Hence, I cannot be said to know the
contradiction that all of the tickets will lose and one of them will win.
So, principles (1′) and (2′) do not lead to an epistemic version of the
paradox in the same way that the paradox for rational belief arose.37
Nevertheless, there is another way of showing how the two
epistemic principles above lead to a problematic result in lottery cases.38
Suppose that ticket 1000 will be the winning ticket of our fair, 1000-ticket lottery. Suppose also that I consider the lottery tickets in a
methodical way, by deciding for each one in turn whether it will win or
lose. I recognize that t1 has a .999 probability of losing; it is also true
that it will lose. So, according to principle (1′), I know that p1. In the
same way, my consideration of whether t2 will lose allows me to know
that p2. Principle (2′) then allows me to conclude that p1 & p2. More
generally, as I continue considering the tickets one by one, principle (1′)
allows me to acquire new pieces of knowledge, and principle (2′) allows
me to add them to the conjunction I know to be true. Ultimately, I end
up knowing that p1 & p2 & … & p999. Before considering t1000, I remember
that one of the tickets will win. I know that t1 through t999 will lose, so
t1000 must be the winner. The belief I end up with is true, and I can see
that it is obviously entailed by things that I know, so it is surely justified
as well. But it does not seem to be knowledge.
Explaining what is defective about acquiring justified true beliefs
that fall short of knowledge in this way will require us to reject one of
the two principles underlying the knowledge version of the lottery
paradox. The most common response has been to deny or modify (1′).
So, for example, Dana Nelkin has argued that my belief that p1 fails to be
knowledge because the fact that makes it true does not bear a causal or
explanatory connection to the belief.39 Adapting a strategy from Gilbert
Harman, we might hold that (1′) should apply only to the subject’s
entire set of knowledge rather than to individual instances of
knowledge.40 Or, following John Hawthorne, we might take (1′) to fail in
lottery-like situations because they make error possibilities salient in a
way that undermines the subject’s ability to have knowledge.41
Each strategy will face difficulties of its own. For example, requiring
a causal or explanatory connection between the belief and the fact that
makes it true may mean that it is impossible for anyone to ever know
anything about the future, about general facts, or about abstract objects.
Applying principle (1′) only to the subject’s entire set of knowledge will
make fallibilism impossible since the probability of one’s body of
knowledge taken as a whole will surely be low. And, finally, it will be
difficult to say why the salience of error possibilities undermines
knowledge in lottery situations. If fallibilism is correct, there are error
possibilities for every instance of knowledge; if the ones that matter are
all and only the salient error possibilities, it will obviously be very
important to have an account of salience that is not merely ad hoc.
The other main option is to reject principle (2′).42 Doing so means
rejecting or modifying the idea that knowing two separate propositions
thereby permits one to know their conjunction. There is a clear fallibilist
rationale for this strategy: if knowledge is essentially probabilistic in
nature, then conjoining one’s knowledge will also compound the risk of
error. In response, some philosophers have objected that abandoning
(2′) leaves deductive reasoning no clear role to play in epistemology.43
Whatever response we make to the lottery paradox, then, it seems that
fallibilism will require some modification of our basic assumptions
governing knowledge.
5. CONCLUSION
Although many philosophers have come to regard fallibilism as the only
serious option in epistemology, there is still much work to be done in
understanding the nature of fallibilistic knowledge. Accepting fallibilism
may involve the rejection of some traditional ways of conceiving of
knowledge.
In addition to the problem it poses for closure (discussed in the
previous section), fallibilism also seems to conflict with the standard
conception of epistemic possibility. On the usual way of thinking, it is
epistemically possible (for S) that p just in case S does not know that
not-p. Equivalently, it is epistemically impossible (for S) that not-p just
in case S knows that p. But, if fallibilism is correct, it looks like a subject
can know a proposition while its contradictory remains possible—in
fact, the rough formulation of fallibilism says as much.44
A further problem for fallibilism arises from the fact that it seems to
license problematic assertions with forms like, “I know that p, but it
might be false that p” and “It is probable that p, and I know that p.”
Some philosophers—e.g., David Lewis—have thought these so
problematic that they have regarded them as compelling reasons to favor
infallibilism.45 But the issues here are complex. There are similar forms
of assertion, also licensed by fallibilism, that seem not to be (as)
problematic: e.g., “I know that p, and yet it might be false that p.”46
There are also similar, problematic forms of assertion that are licensed
by every epistemological theory (fallibilist or infallibilist): “It is true that
p, but I know that p.” Reflection on a wide range of assertions may leave
us skeptical that we can recover much of epistemological significance
from attention to the ways we can and cannot talk about knowledge and
epistemic possibility.
Finally, and most fundamentally, the problem of accidentality
remains to be solved. Without a compelling solution to the Gettier
problem, the fallibilist approach to epistemology cannot ultimately be
considered a success.47
REFERENCES
Alston, William. 1992. “Infallibility,” in A Companion to
Epistemology, J. Dancy and E. Sosa (eds.), Oxford: Blackwell, p.
206.
Ayer, A.J. 1956. The Problem of Knowledge. London: Penguin.
BonJour, Laurence. 1998. In Defense of Pure Reason. Cambridge:
Cambridge University Press.
——. 1985. The Structure of Empirical Knowledge. Cambridge, MA:
Harvard University Press.
Chisholm, Roderick. 1989a. Theory of Knowledge, 3rd ed. Englewood
Cliffs, NJ: Prentice Hall.
——. 1989b. “Probability in the Theory of Knowledge,” in
Knowledge and Skepticism, M. Clay and K. Lehrer (eds.), Boulder,
CO: Westview, pp. 119-30.
——. 1957. Perceiving: A Philosophical Study. Ithaca, NY: Cornell
University Press.
Cicero. 2006. On Academic Scepticism, tr. by C. Brittain.
Indianapolis: Hackett.
Cohen, Stewart. 1988. “How to Be a Fallibilist,” Philosophical
Perspectives 2: 91-123.
Conee, Earl, and Feldman, Richard. 2004. Evidentialism. Oxford:
Clarendon Press.
Dancy, Jonathan and Sosa, Ernest. 1992. A Companion to
Epistemology. Oxford: Blackwell.
DeRose, Keith. 1991. “Epistemic Possibilities,” Philosophical
Review 100: 581-605.
Descartes, René. 1984. The Philosophical Writings of Descartes, vol.
II, tr. by J. Cottingham, R. Stoothoff, & D. Murdoch. Cambridge:
Cambridge University Press.
Dodd, Dylan. Forthcoming. “Against Fallibilism,” Australasian
Journal of Philosophy.
——. 2010. “Confusion about Concessive Knowledge Attributions,”
Synthese 172: 381-96.
Dougherty, Trent, and Rysiew, Patrick. 2009. “Fallibilism,
Epistemic Possibility, and Concessive Knowledge Attributions,”
Philosophy and Phenomenological Research 78: 123-32.
Dretske, Fred. 1971. “Conclusive Reasons,” Australasian Journal of
Philosophy 49: 1-22.
Fantl, Jeremy, and Matthew McGrath. 2009. Knowledge in an
Uncertain World. Oxford: Oxford University Press.
Feldman, Richard. 2003. Epistemology. Upper Saddle River, NJ:
Prentice Hall.
——. 1974. “An Alleged Defect in Gettier Counter-Examples,”
Australasian Journal of Philosophy 52: 68-9.
Foley, Richard. 2009. “Beliefs, Degrees of Belief, and the Lockean
Thesis,” in Degrees of Belief, ed. by F. Huber and C. Schmidt-Petri,
Springer, pp. 37-47.
——. 1993. Working Without a Net. Oxford: Oxford University
Press.
——. 1987. The Theory of Epistemic Rationality. Cambridge, MA:
Harvard University Press.
Fumerton, Richard. 2006. Epistemology. Oxford: Blackwell.
——. 2004. “Epistemic Probability,” Philosophical Issues 14: 149-164.
Gettier, Edmund. 1963. “Is Justified True Belief Knowledge?”
Analysis 23: 121-3.
Goldman, Alvin. 1992. Liaisons. Cambridge, MA: MIT Press.
——. 1979. “What Is Justified Belief?” in Justification and
Knowledge, ed. by G. Pappas, Dordrecht: Reidel, pp. 1-23.
Reprinted in Goldman (1992): 105-26.
——. 1976. “Discrimination and Perceptual Knowledge,” Journal
of Philosophy 73: 771-91. Reprinted in Goldman (1992): 85-103.
——. 1967. “A Causal Theory of Knowing,” Journal of Philosophy
64: 357-72. Reprinted in Goldman (1992): 69-83.
Greene, Richard and Balmert, N.A. 1997. “Two Notions of
Warrant and Plantinga’s Solution to the Gettier Problem,”
Analysis 57: 132-9.
Harman, Gilbert. 1973. Thought. Princeton, NJ: Princeton
University Press.
——. 1970. “Induction,” in Induction, Acceptance, and Rational Belief,
M. Swain (ed.), Dordrecht: Reidel, pp. 83-99.
Hawthorne, John. 2005. “The Case for Closure,” in Contemporary
Debates in Epistemology, M. Steup and E. Sosa (eds.), Oxford:
Blackwell, pp. 26-42.
——. 2004. Knowledge and Lotteries. Oxford: Clarendon Press.
Hetherington, Stephen. 1999. “Knowing Failably,” The Journal of
Philosophy 96: 565-587.
——. 1996. “Gettieristic Scepticism,” Australasian Journal of
Philosophy 74: 83-97.
Jeshion, Robin. 2000. “On the Obvious,” Philosophy and
Phenomenological Research 60: 333-355.
Keynes, John Maynard. 1921. A Treatise on Probability. New York:
Macmillan.
Klein, Peter. 1985. “The Virtues of Inconsistency,” Monist 68:
105-35.
——. 1971. “A Proposed Definition of Propositional Knowledge,”
Journal of Philosophy 68: 471-82.
Kyburg, Henry. 2003. “Probability as a Guide in Life,” in
Probability is the Very Guide of Life, H. Kyburg and M. Thalos (eds.),
Chicago: Open Court, pp. 135-50.
——. 1971. “Epistemological Probability,” Synthese 23: 309-26.
——. 1970. “Conjunctivitis,” in Induction, Acceptance, and Rational
Belief, M. Swain (ed.), Dordrecht: Reidel, pp. 55-82.
——. 1961. Probability and the Logic of Rational Belief. Middletown,
CT: Wesleyan University Press.
Lackey, Jennifer. 2008. “What Luck Is Not,” Australasian Journal of
Philosophy 86: 255-67.
Lehrer, Keith. 1974. Knowledge. Oxford: Oxford University Press.
Lehrer, Keith, and Thomas Paxson. 1969. “Knowledge:
Undefeated Justified True Belief,” Journal of Philosophy 66: 225-37.
Lewis, C.I. 1952. “The Given Element in Empirical Knowledge,”
Philosophical Review 61: 168-175.
Lewis, David. 1996. “Elusive Knowledge,” Australasian Journal of
Philosophy 74: 549-67.
Makinson, D. C. 1965. “The Paradox of the Preface,” Analysis 25:
205-7.
Mellor, D.H. 2005. Probability: A Philosophical Introduction. London:
Routledge.
Merricks, Trenton. 1995. “Warrant Entails Truth,” Philosophy and
Phenomenological Research 55: 841-855.
Myers, Robert, and Kenneth Stern. 1973. “Knowledge Without
Paradox,” Journal of Philosophy 70: 147-60.
Nelkin, Dana. 2000. “The Lottery Paradox, Knowledge, and
Rationality,” Philosophical Review 109: 373-409.
Nozick, Robert. 1981. Philosophical Explanations. Cambridge, MA:
Harvard University Press.
Olin, Doris. 2003. Paradox. Montreal: McGill-Queen’s University
Press.
Plantinga, Alvin. 1993. Warrant and Proper Function. Oxford:
Oxford University Press.
Pollock, John. 1983. “Epistemology and Probability,” Synthese 55:
231-52.
Pritchard, Duncan. 2005. Epistemic Luck. Oxford: Clarendon Press.
Reed, Baron. Forthcoming-a. “Fallibilism, Epistemic Possibility,
and Epistemic Agency,” in Philosophical Issues (Epistemic Agency).
——. Forthcoming-b. “Knowledge, Doubt, and Circularity,”
Synthese.
——. 2010. “A Defense of Stable Invariantism,” Noûs 44: 224-44.
——. 2009. “A New Argument for Skepticism,” Philosophical
Studies 142: 91-104.
——. 2008. “Certainty,” The Stanford Encyclopedia of Philosophy (Fall
2008 Edition), Edward N. Zalta (ed.), URL =
<http://plato.stanford.edu/archives/fall2008/entries/certainty/>.
——. 2007. “The Long Road to Skepticism,” The Journal of
Philosophy 104: 236-62.
——. 2005. “Accidentally Factive Mental States,” Philosophy and
Phenomenological Research 71: 134-42.
——. 2002a. “The Stoics’ Account of the Cognitive Impression,”
Oxford Studies in Ancient Philosophy 23: 147-180.
——. 2002b. “How to Think about Fallibilism,” Philosophical
Studies 107: 143-157.
——. 2000. “Accidental Truth and Accidental Justification,” The
Philosophical Quarterly 50: 57-67.
Russell, Bertrand. 1948. Human Knowledge: Its Scope and Limits.
New York: Simon and Schuster.
Rysiew, Patrick. 2001. “The Context–Sensitivity of Knowledge
Attributions,” Noûs 35: 477–514.
Shope, Robert. 2002. “Conditions and Analyses of Knowing,” in
The Oxford Handbook of Epistemology, P. Moser (ed.), Oxford:
Oxford University Press, pp. 25-70.
——. 1983. The Analysis of Knowing. Princeton: Princeton
University Press.
Stanley, Jason. 2005. “Fallibilism and Concessive Knowledge
Attributions,” Analysis 65: 126-31.
Strawson, P.F. 1992. Analysis and Metaphysics. Oxford: Oxford
University Press.
Unger, Peter. 1975. Ignorance: A Case for Scepticism. Oxford:
Clarendon Press.
Vogel, Jonathan. 1992. “Lottery Paradox,” in A Companion to
Epistemology, J. Dancy and E. Sosa (eds.), Oxford: Blackwell, pp.
265-7.
Weatherson, Brian. 2003. “What Good Are Counterexamples?”
Philosophical Studies 115: 1-31.
Williams, Michael. 2001. “Contextualism, Externalism and
Epistemic Standards,” Philosophical Studies 103: 1-23.
——. 1999. “Skepticism,” in The Blackwell Guide to Epistemology, J.
Greco and E. Sosa (eds.), Oxford: Blackwell, pp. 35-69.
Williamson, Timothy. 2000. Knowledge and Its Limits. Oxford:
Oxford University Press.
Zagzebski, Linda. 1994. “The Inescapability of Gettier Problems,”
Philosophical Quarterly 44: 65-73.
NOTES
1
Infallible foundations have been conceived in various ways—e.g., by
the ancient Stoics as cognitive impressions, by Descartes as clear and
distinct perceptions, and by twentieth-century philosophers like C.I.
Lewis as “the given” element in experience (for more on these views,
see Reed (2002a), Descartes (1984), and Lewis (1970), respectively). It
should be noted that infallible foundations preclude error in a limited
way. For example, the clear and distinct perception that a circle is a
shape allows the subject to know, without possibility of error, that a
circle is a shape. But the subject might still combine this clear and
distinct perception with other beliefs that are not clearly and distinctly
perceived in such a way that a false belief is the result. Indeed, it is
because this is possible that the Stoics took only sages (who possess a
systematic body of cognitive impressions) to be free from error; see
Cicero (2006), p. 34. The rest of us are still prone to making mistakes,
despite the fact that we do have cognitive impressions. Descartes also
seems to be aware of the difference between limited certainty and
absolute certainty; see Reed (forthcoming-b).
2
I am using ‘grounding’ here in a broad sense, so as to include not only
support from reasons and evidence but also possession of so-called
externalist properties—such as reliability—as well. Other terms that can
be used in roughly the same way as ‘grounding’ in this sense are
‘justification’ and ‘warrant’. Fallibilist theories of justification, on
which one can have justification for the belief that p even in cases where
p is false, are relatively uncontroversial, given that one can have
justification for a belief without the belief being justified simpliciter.
But it is worth noting that fallibilism about knowledge entails
commitment to fallibilism not only about justification but also about
justification simpliciter.
3
This sort of fallibilistic response can be seen to follow in the wake of
the first systematic presentation of skepticism in the Hellenistic world;
see, e.g., Brittain’s introduction to Cicero (2006). For present-day
versions of this response, see: Cohen (1988); Williams (1999), p. 54;
and Feldman (2003), pp. 122-128. See Unger (1975) for a skeptical
argument that relies explicitly on an infallibilist conception of
knowledge.
4
For this argument, see, e.g., Strawson (1992), pp. 91-6.
5
And, even if I did know it in that sort of way when I first acquired the
belief, I do not now know it through its relation to the given. So, that
cannot be the explanation for my current knowledge.
6
For defenses of infallibilism—less than full-blooded, in some cases—
see Fumerton (2006), ch. 2; D. Lewis (1996); and Dodd (forthcoming).
7
Cohen (1988), p. 91. Michael Williams agrees: “We are all fallibilists
nowadays” (2001, p. 5).
8
See, e.g., the various papers in Goldman (1992) and in Conee and
Feldman (2004).
9
Throughout this section, I rely on Reed (2002b).
10
Here and throughout the paper, I am using the term ‘justification’ as a
placeholder. The reader can substitute for it evidential relations, modal
relations like sensitivity or safety, reliability properties, or whatever
features in her preferred account of knowledge. For variants on this
formulation of fallibilism, or the analogous account of infallibilism, see,
e.g., Ayer (1956), pp. 54-56; BonJour (1985), p. 26; BonJour (1998), p.
16; Alston (1992); and Pritchard (2005), p. 17. See also Lehrer’s first
definition of ‘incorrigibility’, which is (in my terms) a definition of
infallibility (1974, p. 81).
11
The relevant sort of possibility here (and in the earlier formulation, in
which a belief “could have been mistaken”) is metaphysical or broadly
logical. This is why Dretske’s conclusive reasons view counts as a type of
fallibilism. Although he says, “If S has conclusive reasons for believing
P, then it is false to say that, given these grounds for belief, and the
circumstances in which these grounds served as the basis for his belief,
S might be mistaken about P” (1971, p. 13), the impossibility he has in
mind is physical or nomological in nature. In a similar way, David Lewis
requires that, for a subject to know that P, her evidence rule out, not
absolutely every possibility in which not-P, but just “every possibility in
which not-P—Psst!—except for those possibilities that conflict with our
proper presuppositions” (1996, p. 554). In other words, the subject’s
evidence must rule out every not-P possibility except for those that she
is entitled to ignore. Although Lewis presents this as a kind of
infallibilism, I think the view achieves the “impossibility of error” only
through a kind of creative accounting. (Compare: “Of course we turned
a profit last year. We brought in more than enough revenue to cover our
expenses—Psst!—except for those costs we’re ignoring.”) Because it is
not the subject’s evidence (or justification or reasons or whatever
provides her epistemic grounding) that eliminates all of the ways in
which her belief could (in the broadest sense) be false, her evidence
does not make her infallible with respect to the belief. (To be clear, this
is not meant as an objection to the views of either Lewis or Dretske.
Either view may be correct as an account of knowledge (or, in Lewis’s
case, of knowledge attributions); my point is merely that neither of
them is really a version of infallibilism.)
12
For some examples of this formulation of fallibilism, or the analogous
formulation of infallibilism, see Cohen (1988), p. 91; Merricks (1995),
p. 842; Jeshion (2000), pp. 334-335; Conee and Feldman (2004), ch. 12;
Stanley (2005), p. 127; and Dougherty and Rysiew (2009), p. 128.
13
This is a problem that has been recognized for some time; see, e.g.,
Lehrer (1974), pp. 82-83; Hetherington (1999), p. 565; Merricks (1995);
Reed (2002b); and Fumerton (2006), p. 60.
14
For this reason, entailment should not feature in accounts of
epistemic possibility, as it does in Stanley (2005) and Dougherty and
Rysiew (2009).
15
This case is derived from Russell (1948), p. 154.
16
See Reed (2002b), p. 151, for more on fallibilism and probability and
Reed (2000) for more on accidentality.
17
There are three clarifications worth noting. First, the probability
relation is not incompatible with there also being an entailment relation
between the subject’s justification and the belief it justifies. In fact,
there will be such an entailment when the subject’s belief is of a
necessary proposition. My point here is simply that the subject’s
justification will not supervene on that entailment. Second, if one’s
justification makes one’s belief certain, and certainty implies probability,
then it would seem that the beliefs one knows with certainty would be
instances of fallibilistic knowledge. To preclude cases of this sort, we
can understand fallibilistic knowledge to occur only in cases where one’s
justification makes the belief in question merely probable. (On this point,
I am grateful to an anonymous referee.) Third, in the relevant sense of
probability, it cannot be the case that necessary truths are assigned an
unconditional probability of 1; if they were, the problem with necessity
would arise again, for necessary truths would then have probability 1,
no matter what one’s evidence might be. For this reason, probability in
the epistemic sense cannot be unconditional in nature. Beliefs—even
beliefs about necessary propositions—can be epistemically probable only
relative to their grounding. (On this point, I am grateful to Sherri
Roush.) For more on fallibilism understood in terms of probability, see
Reed (2002b) and Fantl and McGrath (2009), ch. 1.
18
This sort of view is grounded in Keynes (1921). See also Fumerton
(2004); Kyburg (2003) and (1971); and Chisholm (1989a, pp. 54-56 and
63-64) and (1989b).
19
For more on these interpretations of probability, see Russell (1948),
part five, and Mellor (2005). Goldman’s reliabilism is one prominent
example of a view that takes justification to be grounded in probability
as a measure of either actual or counterfactual frequencies; see his
(1979), pp. 114-115.
20
The problem is first explicitly stated in Gettier (1963), but an earlier
example of this sort—the clock case mentioned above—can be found in
Russell (1948). See Shope (1983) for an account of the most influential
early responses to Gettier’s paper. See Reed (2000) for more on the
nature of epistemic accidentality. See Pritchard (2005) and Lackey
(2008) on the related phenomenon of epistemic luck.
21
Gettier’s specific targets are Chisholm (1957) and Ayer (1956), but
virtually all epistemologists in the years since his paper was published
have recognized the general significance of the problem of accidentality.
For a case of justified, accidentally true belief that is perhaps in some
ways different from Gettier’s original examples, see Ginet’s barn façade
case in Goldman (1976). For cases of beliefs that are both accidentally
true and accidentally justified, see Reed (2000).
22
See Myers and Stern (1973) and Armstrong (1973), p. 152. Harman
(1970) and (1973), pp. 120-4, argues that the rules of belief acceptance
cannot be probabilistic because, if they were, they would not allow this
sort of response to the Gettier problem. Feldman (1974) objects that
this sort of response will not work in all sorts of Gettier cases—there are
some that do not rely on false premises.
23
For this so-called defeasibility account, see, e.g., Lehrer and Paxson
(1969) and Klein (1971). See Shope (1983), ch. 2, for objections.
24
See, e.g., Goldman (1967) and Nozick (1981). See Ginet’s barn façade
case in Goldman (1976) for an example of the sort of case that has
proven to be problematic for these externalist accounts of knowledge.
25
See Shope (1983) and (2002) for many of the proposed solutions and
proposed counterexamples.
26
I should also mention that a few philosophers have tried arguing that
Gettier cases are not really a serious problem. Thus, Hetherington
(1999) takes the subjects in Gettier cases to have borderline instances of
knowledge, and Weatherson (2003) argues that we might be better off
rejecting the intuitions underlying Gettier cases in favor of the simple
and intuitive theory they undermine.
27
Two examples of this strategy include Plantinga (1993), p. 48, and
Williamson (2000), pp. 4, 30. However, it has been objected that both
of their views do in fact face the problem of accidentality; see Greene
and Balmert (1997) and Reed (2005), respectively.
28
See Reed (2007) and (2009) and Zagzebski (1994). See Hetherington
(1996) for another way of linking the Gettier problem and skepticism.
29
The lottery paradox was first formulated by Henry Kyburg (1961, pp.
197-9). See also his presentation of it in Kyburg (1970). For other
helpful presentations of the lottery paradox, see Vogel (1992), Nelkin
(2000), and Olin (2003).
30
Some philosophers will prefer to talk, not about rational belief
outright, but rather about rational degree of belief. It is thought that
this will permit a solution to the lottery paradox because the degree of
belief it is rational to have will drop as the subject conjoins propositions
she individually has some higher degree of confidence in. See Foley
(2009) for discussion of this option and of what he calls the “Lockean
thesis,” which links rational degree of belief with rational outright
belief.
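To make the point concrete with a toy calculation (the 0.99 figure and
the independence assumption are illustrative, not drawn from Foley): if a
subject has degree of belief 0.99 in each of 100 independent
propositions, the degree of belief it is rational to have in their
conjunction is only

```latex
\Pr\Bigl(\bigwedge_{i=1}^{100} p_i\Bigr)
  \;=\; \prod_{i=1}^{100} \Pr(p_i)
  \;=\; 0.99^{100} \;\approx\; 0.37,
```

well below any plausible threshold for rational outright belief, which is
why the degreed approach promises to block the conjunction step of the
paradox.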
31
See, for example, Lehrer (1974), pp. 202-4.
32
It should be noted, however, that having a belief set with inconsistent
members is not necessarily the same as believing an outright
contradiction. On this point, see Kyburg (1970), pp. 56-60, where he
draws a distinction between a weak and a strong principle of
consistency; both principles rule out believing contradictions, but the
former permits having beliefs that entail a contradiction (how this is
possible is explained below; see note 41). See also Klein (1985) and
Foley (1987), pp. 241-7, and (1993), pp. 162-73. The preface paradox
arises when an author, who is committed to the truth of every claim in
her book, nevertheless reflects that her fallibility makes it reasonable for
her to believe that at least one of those claims is false; see Makinson
(1965) and Olin (2003), ch. 4.
33
Could the knowledge form of the lottery paradox be avoided by
moving to a theory that is focused on degrees of knowledge rather than
knowledge outright? Although this strategy might plausibly work for
the belief form of the paradox, matters are complicated here by the fact
that there will have to be a minimal threshold below which one does not
possess knowledge of any degree. Given any plausible threshold for the
minimal degree of knowledge, it should be possible to formulate the
lottery paradox so that the subject meets that threshold with respect to
each of her beliefs about the losing tickets.
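The last step of this note can be spelled out (the threshold notation t
is mine): whatever threshold t &lt; 1 is set for the minimal degree of
knowledge, a fair lottery with n tickets, where n exceeds 1/(1 - t),
makes each proposition “ticket i will lose” more probable than t:

```latex
\Pr(\text{ticket } i \text{ loses})
  \;=\; \frac{n-1}{n}
  \;=\; 1 - \frac{1}{n}
  \;>\; t
  \quad\text{whenever}\quad n > \frac{1}{1-t}.
```

So each belief about a losing ticket meets the threshold, and the
paradox can be run as before.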
34
It may be necessary to add two clauses to this principle: (a) one
believes that p & q and (b) one does so because one knows that p and
one knows that q. These additions would prevent (a) cases of knowledge
without belief and (b) unrelated lucky guesses that p & q from counting
as knowledge. In what follows, though, I shall ignore these
complications.
35
(2′) is what has become known as an epistemic closure principle (in
this case, governing conjunction).
36
To secure this result, we will need to build into the case that my belief
has been acquired in the way specified by one’s favorite fallibilistic
theory—e.g., it is the product of reliable belief-producing processes, or it
is held on the basis of one’s evidence about the odds. When that is the
case, it will not typically be a coincidence that the belief is both true and
justified, as we saw happen in the case of Smith’s belief that either Jones
owns a Ford or Brown is in Barcelona.
37
See Nelkin (2000) for an attempt to construct a knowledge version of
the lottery paradox along these lines. Olin (2003), p. 204, correctly
points out that it will not work, given that one of the allegedly known
propositions is in fact false and thus fails to be knowledge.
38
See Reed (2010) and Hawthorne (2004).
39
Nelkin (2000), p. 390. See also Dretske (1971), pp. 3-4, who
disallows knowledge of lottery propositions because the probabilistic
basis for them does not make it impossible for the subject to be
mistaken in those circumstances. Nozick (1981) would disallow
knowledge of lottery propositions because they fail his sensitivity
condition, which holds that, if the proposition in question were false,
one wouldn’t believe it. In the case of a lottery proposition, one would
continue to believe (on merely probabilistic grounds) that the ticket will
lose even in the case in which it wins.
40
See Harman (1973), p. 119. Harman’s argument is framed in terms of
rational acceptance rather than knowledge.
41
Hawthorne (2004), pp. 160-2.
42
See Reed (2010) for the rejection of (2′). For the rejection of (2)—the
rational belief version of the principle—see Kyburg (1970); and Foley
(1987), p. 243, and (1993), pp. 162-73. As Kyburg notes, it is possible
to draw a distinction between it being rational to hold inconsistent
beliefs and it being rational to hold contradictory beliefs only if one
rejects (2).
43
See Pollock (1983). This issue also intersects with the recent debate
as to whether knowledge is deductively closed; see Hawthorne (2005)
for the case to be made in favor of closure.
44
For a solution to this problem, see Reed (2010) and (forthcoming-a).
45
See Lewis (1996) and Fantl and McGrath (2009) on the “madness” of
fallibilism. DeRose (1991) argues that assertions of the form, “I know
that p, but it might be false that p,” clash because the conjuncts are
logically inconsistent. For more on the debate over what have come to
be called ‘concessive knowledge attributions’, see Rysiew (2001),
Stanley (2005), Dougherty and Rysiew (2009), and Dodd (2010) and
(forthcoming). I am grateful to an anonymous referee for pressing this
point.
46
Notice that assertions of this type have the same logical form as
assertions of the first type mentioned above (“I know that p, but it
might be false that p”). The fact that the assertions of the one type
sound much more “clashy” than do assertions of the other type
indicates that the clash in question does not derive from any logical
contradiction.
47
For helpful comments on a draft of this paper, I am grateful to an
anonymous referee for Philosophy Compass, Sherri Roush, and, especially,
Jennifer Lackey.