Reverse Engineering Epistemic Evaluations
* This paper won the Young Epistemologist Prize for the Rutgers Epistemology
Conference in 2011.
I pronounce it, when speaking English, like this: sin-on dor-uh-mudge-uh.
1. The question of this paper thus contrasts with the timeworn questions of how to
give necessary and sufficient conditions for this or that philosophically interesting
property. There are, however, a few excellent philosophical explorations of the func-
tion of this or that philosophically interesting word or concept.
In philosophy of logic and language, Quine offered an elegantly simple insight
about the utility of the truth predicate which precipitated the contemporary devel-
opment of deflationism about truth; see Quine (1970), Leeds (1978), Horwich (1990/
98), and Field (1994).
In metaphysics, a series of recent papers on modality have broached the fasci-
nating issue of the function of modal concepts; see Kment (2006), chapter 5 of
Williamson (2007), and Divers (2010).
there appears to be a range of cases that stand as stark and inexplica-
ble exceptions. These are cases where we criticize as irrational the
beliefs of someone we know is tracking the truth very well, and we
praise as rational the beliefs of someone we know is tracking the truth
very badly!
The clearest such cases are the old counterexamples to the simplest
and most intuitive versions of reliabilism. A simple reliabilist theory of
justification, such as Goldman (1979), is just the sort of theory we’d
have expected to be right if our evaluative practice is a means for
getting at the truth. But then BonJour (1980) gave us the unwittingly
reliable clairvoyant, Norman, whose beliefs we criticize, calling them
‘irrational’. The thing that’s puzzling me is, if we know Norman tracks
the truth so well, why on earth do we criticize his beliefs? What pur-
pose could it serve? That’s a puzzle case for anyone wondering what
function ‘rational’ serves.4
Some people hesitate to call Norman’s beliefs irrational.5 Perhaps
this compromises Norman’s status as a counterexample to reliabilism,
but it doesn’t remove the puzzle I’m concerned with. Why do so many
people feel any temptation at all to criticize the beliefs of someone
whom we know, by stipulation, enjoys an excellent, robustly reliable
connection to the truth? That’s a puzzle, as I said, for anyone wonder-
ing what function ‘rational’ serves.
The other classic counterexample to reliabilism also generates the
puzzle. Cohen (1984) pointed out that Dupe, the victim of a Cartesian
evil demon, uses an unreliable process to form perceptual beliefs, but
we apply ‘rational’ to Dupe’s beliefs. Why do we give epistemic praise
to someone we know to be so badly disconnected from the truth?
Cohen himself was gripped by a puzzle different from ours but closely
related. He wondered about, as he labeled it, ‘‘the truth-connection’’:
surely, Cohen said, rational beliefs are somehow connected with truth,
but if reliabilism is wrong, then what could that truth-connection be?
My puzzle is not in the first instance about the property of being rational,
but is rather about our use of the word ‘rational’ and similar terms of
epistemic evaluation. I’m puzzled by why we use these words in ways
that appear to be downright counterproductive(!) to our ultimate episte-
mic goal, namely believing just the truth (about topics of interest to us).
4. Goldman, BonJour and other contributors to this literature mostly used the term
‘justification’. Of course, they were explicitly using it to pick out a notion present in
ordinary thought and talk. As noted, I treat ‘justified’ and ‘rational’ as meaning
pretty much the same thing as far as ordinary conversation goes.
5. See, for example, Weinberg et al. (2001). As the authors emphasize, responses
vary with cultural background. This important point has supportive
implications for the larger project advertised in the final section of this paper.
6. I hope it is clear why we cannot say our inferences are better than Rami's because
we make ours with the recognition of their validity. Three familiar problems: (1)
Most people don’t give any thought to the validity of their inferences. (2) Between
Carroll (1895) and Boghossian (2003), it should be very clear that recognition of
validity doesn’t help to explain why our basic deductively valid inferential transi-
tions are rational. And (3) Rami is in as good a position as we are to argue, by giv-
ing a standard rule-circular soundness proof, that his basic transitions are valid.
epistemically evaluative practice is just an instrumental utility it has by
helping us get true beliefs. I think there is such a solution to the puzzle,
and I will use the rest of this paper to sketch it.
The task is one of reverse engineering. First, bracketing specula-
tion about what it is for, let’s look at what our epistemically evaluative
practice actually is, and what some of its significant effects on us are.
Once we have a picture in hand, we can try to come up with an expla-
nation of how this practice benefits us: what exactly is it for, and how
does it accomplish that? I’ll propose an answer that vindicates the intu-
ition that it is all ultimately for the sake of having true beliefs.
have the simplest logical form of any of the assertions we make using
‘rational’ or ‘irrational’. In attempting to reverse engineer a function
for epistemically evaluative language, the aspect of our overall use that
I will focus on is the use exemplified by the indented assertions above.
In other words, my focus is on our practice of calling beliefs rational
and, the more common case, calling beliefs irrational. I label this the
simple use of epistemically evaluative terms like ‘[ir]rational’.
(The simple use is not the only way we use ‘rational’. We embed
‘rational’ into non-indicative and/or logically complex structures. Also,
not only do we use the word in speech, we use the associated concepts
in thought, for instance if I were to believe that Smith’s belief is irratio-
nal, perhaps never making an assertion that expresses that belief. For
now, I am setting these uses aside, returning to them only at the very
end of the paper.)
In any instance of the simple use, like the indented claims above, the
surface form of the assertion indicates that the object of the evaluation
is a belief or a disbelief. However, two observations about ordinary discourse
reveal that what is being implicitly evaluated are belief-forming rules.
7. See, for example, Pollock and Cruz (1999), Goldman (1986, 2009), Wedgwood
(2002), Field (2000), Peacocke (2004), Boghossian (2008).
Note also that some authors distinguish our basic (observational,
inductive, etc.) rules from non-basic rules, such as believing what the
Times reports. Whenever I refer to rules, I mean basic rules (unless I
explicitly describe them as non-basic rules).
So then, in a critical instance of the simple use, like the criticism of
Smith’s belief about Obama, the evaluator is (implicitly) criticizing
Smith’s failure to follow the correct rules. Maybe Smith accepts the
correct rules, but she failed on this occasion to follow them, i.e., she
committed a performance error, like what happens with syntax. More
likely in this case, Smith followed bad rules that take fear and prejudice
as bases for beliefs. (Instances of the simple use that involve epistemic
praise are slightly more complicated, since we call a belief ‘rational’
only if the belief was formed by correct rules and the believer had no
irrational beliefs in her basis. To avoid wordy formulations, and in any
case to stick with the more common instances of the simple use, I’ll
continue to focus on critical instances.)
Hare makes two claims here, both of which I claim are plausible when
they are transplanted into the epistemic domain. Let’s examine them.
The first claim is this: there’s a robust correlation between the rules a
speaker (implicitly) evaluates positively/negatively and the rules that she herself follows.
8. See Hare (1998), section 2. My ellipsis deletes ‘(in the above broad sense)’. Hare
describes that as ‘a broad sense in which it covers Kant’s rational will and
Aristotle’s boulesis or rational desire.’
Nature, when she formed man for society, endowed him with an origi-
nal desire to please, and an original aversion to offend his brethren.
She taught him to feel pleasure in their favourable, and pain in their
unfavourable regard. She rendered their approbation most flattering
and most agreeable to him for its own sake; and their disapprobation
most mortifying and most offensive.9
9. See Smith (1759/1976), p.116.

The claim is consistent with the observation that, often, the person criticized
(or praised) by an assertion is not the audience, not the person the assertion is
made to. I may tell you that Smith's belief is irrational, even though I believe
or suspect that you follow all and only the correct rules. In such cases, my
evaluation still influences my audience: it
serves to reinforce a good practice, as well as to encourage the prosely-
tization of third parties. Or I may tell you Smith’s belief is irrational,
even though I believe or suspect that you are equally guilty of Smith’s
mistakes. In such cases, my evaluation will be especially effective in
influencing my audience because I am being diplomatic! Indeed, it’s
plausible that such diplomatic third-personal evaluations are the most
common instances of the simple use.
The scope of the claim (that our evaluations tend to influence our
audience’s rule-following) is all our rules, down to our most basic.
There is no restriction, even if cognitive science finds that some rules,
e.g., very basic deductive or inductive rules, are innate.10 Two points
show why. First, what cognitive science could find, at best, is that, like
with syntax, we innately accept certain rules; our competence is innate.
But, our success in following those rules, our avoidance of performance
errors, can still be improved by the social influence of the evaluative
use. Consider, syntax is innate, but some professional athletes’ inter-
view transcripts read like a sputtering of barely interpretable fragments,
while some public speakers are trained to always speak in well-formed
sentences. Second, it’s likely that even our most deeply accepted rules
are subject to modification. After all, many promising solutions to the
semantic paradoxes call on us to revise one or another of our most
dearly held deductive rules of reasoning.11 And with induction, while it
may be hard to imagine actually adopting a ‘‘gruesome’’ rule, it’s easy
and common enough for us to revise the setting of the parameter for
the trade-off between speed and caution in our inductive rules. I criticize
Jones’s belief in a 9/11 conspiracy theory because she is being unrea-
sonably quick to draw conclusions from a small amount of data. And,
I criticize Brown’s global warming skepticism because he is being
unreasonably cautious when the data is overwhelming. The simple use
can bring people like Jones and Brown to settle the trade-off between
power and caution in a more reasonable, moderate way.12
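To make the trade-off concrete, here is a toy formalization of my own (the threshold model is an illustrative assumption, not something the cases above commit us to). Suppose a believer adopts the rule: believe a hypothesis H just in case her evidence E makes H sufficiently probable, for some caution parameter t strictly between 1/2 and 1:

\[
\text{believe } H \iff P(H \mid E) \ge t .
\]

If her credences are reasonably well calibrated, then among the beliefs she forms the expected proportion of false ones is at most 1 - t, while the number of questions she settles shrinks as t rises. Jones's conspiracy theorizing corresponds to setting t too low (fast but error-prone); Brown's skepticism corresponds to setting t too high (safe but nearly powerless). The simple use nudges both toward a moderate setting.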
An important point requires emphasis here: it’s not my view that we
criticize Smith, Jones and Brown as a way of reasoning with them (or
with our audience, if the audience is someone else). We cannot hope to
modify their basic rule-following behavior by providing them with new arguments.
10. See Carey (2009).
11. See Field (2008).
12. The inescapable trade-off between speed and power was something William James
was apparently aware of and moved by; see James (1897/1956). See Goldman
(1986) for a more contemporary discussion.
13. See Pronin et al. (2002).
to help police our reasoning, we receive a more objective evaluation of
it.
Now, this view that evaluative language serves to promote coordi-
nation across the community is not entirely new. Gibbard (1990)
proposed that the function of evaluative discourse is to foster coor-
dination. Gibbard was, however, primarily focused on practical ratio-
nality; he spoke of coordination of actions and feelings, not belief-
forming rules.14 So, the question we still have to answer is: what
could this proposed coordination of belief-forming rules be for?
What we need now is an account of why we use evaluative language
to coordinate belief-forming rules. I offer my answer in the next
section.
3. Epistemic Communism
3.1. Coordination Makes Testimony Trustworthy
We each want true beliefs, and no false beliefs, about the topics that are
important to us. This requires collecting lots of evidence, evidence that
will serve as the bases for the beliefs we want to acquire. We each have
our individual faculties of evidence collection. Each individual’s
perceptual faculties give her experiences that can serve as the bases of perceptual beliefs.
14. See especially chapter 4. For instance, here Gibbard says: ‘Shared evaluation is
central to human life, I suggest, because it serves biological functions of rehearsal
and coordination.’ (p.72) Also: ‘Here, then, in brief, is the proposal. Normative dis-
cussion might coordinate acts and feelings if two things hold. First, normative dis-
cussion tends toward consensus. The mechanisms here, I shall propose, are two:
mutual influence, and a responsiveness to demands for consistency. Second, the
consensus must move people to do or feel accordingly.’ (p.73)
Also see chapter 12, specifically pp.223–6, for good summaries of the relevant
theme of the book. Also see the rest of pp.72–3, where Gibbard separates out two
aspects of normative discourse paralleling the two I used the Hare quote to intro-
duce. And see pp.77-8, where Gibbard emphasizes a point I agreed with earlier,
namely that the connections between evaluation and practice, though robust, may
not be necessary.
A very significant difference I have with Gibbard is that he ends up invoking
the coordinative effect to argue for the anti-realist conclusion that there are no
moral facts. See, especially, chapter 6, for the full presentation of his argument.
Much later, in Gibbard (2003), he backed away from his anti-realist views, saying
that it is difficult to see how to coherently draw the realist/anti-realist distinction.
(See, e.g., p.x of the preface.) I’m sympathetic to the view that there is no good
way to draw a coherent distinction (such that neither side becomes a non-starter).
Field (2009) outlines a view of the evaluative role of the linguistic and psycho-
logical use of ‘rational’ very much like the view in Gibbard (1990), and Field does
focus on the epistemic domain. Field, though, does not discuss coordination, and
goes into few details about how use of the word/concept results in its performing
any function for us. The most he says about function is on p.286, where he says
the function of normative discourse ‘is to give advice, to oneself and others’. I
certainly agree with that much.
15. Thanks to Karl Schafer for the apt label ‘epistemic surrogate’.
Notice how this form of communism, unlike the political version,
gives rise to a highly efficient (epistemic) economy. There is a division of
epistemic labor, the labor of gathering evidence and forming beliefs
based on it. As we labor, we each do not need to store all our evidence.16
This is because, if you present me with testimony but not your evidence,
I can still trust that, whatever your evidence was, you formed the same
belief that I would on the basis of it.17 Furthermore, not only can I make
use of your unstored evidence, I can make use of your computational
resources for reasoning. I can’t come up with and consider every explan-
atory hypothesis. I can’t go through every chain of deductive reasoning.
But, if you come up with a powerful explanatory hypothesis (for some
known data), or infer some deductive result (from some known pre-
mises), I will trust the belief you form, since it is the same belief I would
myself have formed if I’d been able to devote enough computational
resources to the task. Our system even allows us to exploit the differences
among people’s epistemic strengths. People vary with how fast they can
reason, and with how creative they are at coming up with explanatory
hypotheses. We can thus efficiently divide the intellectual labor according
to principles of comparative advantage. (Or, to sound more communist:
from each according to her epistemic ability!)
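To illustrate the comparative-advantage point with some invented numbers (a toy example of mine, not anything argued for in the text): suppose that, in an hour, agent A can gather 6 items of evidence or generate 3 explanatory hypotheses, while agent B can gather only 2 items or generate only 2 hypotheses.

\[
\begin{array}{lcc}
 & \text{evidence per hour} & \text{hypotheses per hour} \\
A & 6 & 3 \\
B & 2 & 2
\end{array}
\]

If each splits the hour evenly between the two tasks, the community gets 4 items of evidence and 2.5 hypotheses. If instead B, whose opportunity cost per hypothesis is lower, spends the whole hour on hypotheses while A spends five-sixths of the hour gathering evidence, the community gets 5 items of evidence and the same 2.5 hypotheses. Specialization pays even though A is better at both tasks.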
16. Harman (1986), chapter 4, made the point that we do not and should not store all
our evidence.
17. To be sure, we should not, and of course we in fact do not, discard all our
evidence. Storing evidence may be costly, but it will always have some epistemic
value. This is because, interestingly, there can be cases where two items of evidence,
E1 and E2, individually confirm beliefs B1 and B2, respectively, even though their
conjunction, E1 & E2, disconfirms the conjunction B1 & B2. In these cases, our
cost-saving engineering trick can fail. Example: the early polls confirmed that a
woman will be nominated. The later polls confirmed that a black person will be
nominated. But, the whole batch of polling data did not confirm that a black
woman will be nominated. In such situations, it benefits us to keep track of more
information than just that E1 confirmed B1. In the example, we want to keep track
of the fact that the early evidence confirmed that Hillary Clinton will be nomi-
nated, while the later evidence confirmed that Barack Obama will be nominated.
These kinds of cases illustrate the value of storing more data.
So, sharing rules only allows us to grow the pool of communal knowledge while
storing vastly less information than we would otherwise need to. Again, sharing
rules is what enables me to share your beliefs without sharing all your evidence.
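Here is a toy Bayesian model that makes the footnote's possibility concrete (the priors and likelihoods are invented purely for illustration). Let the nominee be one of three equally probable candidates: c1, a woman who is not black; c2, a black man; and c3, a black woman. Let B1 be the proposition that a woman will be nominated and B2 the proposition that a black person will be nominated, and suppose the two batches of polls are conditionally independent given the nominee, with

\[
P(E_1 \mid c_1) = 0.6,\quad P(E_1 \mid c_2) = P(E_1 \mid c_3) = 0.2,\qquad
P(E_2 \mid c_2) = 0.6,\quad P(E_2 \mid c_1) = P(E_2 \mid c_3) = 0.2 .
\]

Then each batch of polls confirms its own conclusion,

\[
P(B_1 \mid E_1) = P(B_2 \mid E_2) = 0.8 > \tfrac{2}{3} = P(B_1) = P(B_2),
\]

yet the two batches together disconfirm the conjunction:

\[
P(B_1 \wedge B_2 \mid E_1 \wedge E_2)
= \frac{0.2 \times 0.2}{\,0.6 \times 0.2 + 0.2 \times 0.6 + 0.2 \times 0.2\,}
= \tfrac{1}{7} < \tfrac{1}{3} = P(B_1 \wedge B_2).
\]

In such a case, remembering only that E1 confirmed B1 and E2 confirmed B2 would lead us astray, which is why some evidence is worth storing.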
Consider John Nash as portrayed in the movie A Beautiful Mind. He applies correct perceptual and
deductive belief-forming rules, indeed applies them ingeniously, to
highly misleading inputs about spy codes he sees around him, but
which do not really exist. Thus, we praise Nash’s rule-following, but
must correct his evidence.
18. See references in footnote 1.
19. Many theories compete to say what epistemic rationality holds in virtue of, includ-
ing reliabilism (e.g. Goldman (1979), developed in Goldman (1986)), conservatism
(e.g. Harman (1986) and Huemer (2007)), metasemantic accounts (e.g. Peacocke
(2004) and Boghossian (2003)), and pragmatic accounts of a Reichenbachian sort
(e.g. Enoch and Schechter (2008), Wright (2004), and Wedgwood (2011)).
As every theory so far proposed faces objections, it makes sense to ask whether
there was ever reason to expect that a successful theory exists.
20. For their immense help over the course of the preparation of this paper, I'd like to
thank Ray Buchanan, Hartry Field, John Morrison, Adam Pautz, Jim Pryor, Karl
Schafer, Jonathan Dancy and participants in his spring 2010 graduate seminar at
UT Austin, especially David Enoch and Clayton Littlejohn, participants in the phi-
losophy faculty reading group at UT Austin, and an audience at the 2011 Rutgers
Epistemology Conference.

References
Boghossian, Paul (2003). ‘‘Blind Reasoning.’’ Proceedings of the Aristo-
telian Society Supplementary Volume 77(1): 225–248.
—— (2008). ‘‘Epistemic Rules.’’ The Journal of Philosophy 105(9): 472–
500.
BonJour, Laurence (1980). ‘‘Externalist Theories of Empirical Knowl-
edge.’’ Midwest Studies in Philosophy 5: 135–150.
Carey, Susan (2009). The Origin of Concepts. Oxford: Oxford Univer-
sity Press.
Carroll, Lewis (1895). ‘‘What the Tortoise Said to Achilles.’’ Mind
4(14): 278–280.
Cohen, Stewart (1984). ‘‘Justification and Truth.’’ Philosophical Studies
46(3): 279–295.
Craig, Edward (1990). Knowledge and the State of Nature. Oxford:
Oxford University Press.
Divers, John (2010). ‘‘Modal Commitments.’’ In Bob Hale and Aviv
Hoffman (eds.), Modality: Metaphysics, Logic and Epistemology,
189–219. Oxford: Oxford University Press.
Enoch, David and Joshua Schechter (2008). ‘‘How Are Basic Belief-
Forming Methods Justified?’’ Philosophy and Phenomenological
Research 76(3): 547–579.
Field, Hartry (1994). ‘‘Deflationist Views of Meaning and Content.’’
Mind 103(411): 249–285.
—— (2000). ‘‘A Priority as an Evaluative Notion.’’ In Paul Boghossian
and Christopher Peacocke (eds.), New Essays on the A Priori.
Oxford: Oxford University Press.
—— (2008). Saving Truth from Paradox. Oxford: Oxford University
Press.
—— (2009). ‘‘Epistemology without Metaphysics.’’ Philosophical Stud-
ies 143(2): 249–290.
Gibbard, Allan (1990). Wise Choices, Apt Feelings. Cambridge, MA:
Harvard University Press.
—— (2003). Thinking How to Live. Cambridge, MA: Harvard Univer-
sity Press.
Goldman, Alvin (1979). ‘‘What is Justified Belief?’’ In George Pappas
(ed.), Justification and Knowledge, 1–23. Dordrecht: Reidel.
—— (1986). Epistemology and Cognition. Cambridge, MA: Harvard
University Press.
—— (2009). ‘‘Internalism, Externalism, and the Architecture of Justifi-
cation.’’ The Journal of Philosophy 106(6).
Hare, R. M. (1998). ‘‘Prescriptivism.’’ In Edward Craig (ed.), Routledge
Encyclopedia of Philosophy. London: Routledge.
Harman, Gilbert (1986). Change in View. Cambridge, MA: MIT Press.
Horwich, Paul (1990/98). Truth. 2nd edition. Oxford: Oxford Univer-
sity Press.
Huemer, Michael (2007). ‘‘Compassionate Phenomenal Conservatism.’’
Philosophy and Phenomenological Research 74: 30–55.
James, William (1897/1956). ‘‘The Will to Believe.’’ In The Will to
Believe and Other Essays in Popular Philosophy. New York: Dover.
Kelly, Thomas (2003). ‘‘Epistemic Rationality as Instrumental Ratio-
nality: A Critique.’’ Philosophy and Phenomenological Research
66(3): 612–640.
Kment, Boris (2006). ‘‘Counterfactuals and the Analysis of Necessity.’’
Philosophical Perspectives 20(1): 237–302.
Leeds, Stephen (1978). ‘‘Theories of Truth and Reference.’’ Erkenntnis
13(1): 111–129.
Peacocke, Christopher (2004). The Realm of Reason. Oxford: Oxford
University Press.
Pollock, John and Joseph Cruz (1999). Contemporary Theories of
Knowledge. Lanham, Maryland: Rowman & Littlefield.
Pronin, Emily, Daniel Lin and Lee Ross (2002). ‘‘The Bias Blind Spot:
Perceptions of Bias in Self Versus Others.’’ Personality and Social
Psychology Bulletin 28(3): 369–381.
Quine, Willard Van Orman (1970). Philosophy of Logic. Cambridge,
MA: Harvard University Press.
Reynolds, Steven (2002). ‘‘Testimony, Knowledge, and Epistemic
Goals.’’ Philosophical Studies 110(2): 139–161.
Smith, Adam (1759/1976). The Theory of Moral Sentiments. Oxford:
Oxford University Press.
Wedgwood, Ralph (2002). ‘‘Internalism Explained.’’ Philosophy and
Phenomenological Research 65(2): 349–369.