Values in Science beyond
Underdetermination and Inductive Risk
Matthew J. Brown*†
Proponents of the value ladenness of science rely primarily on arguments from underdetermination or inductive risk, which share the premise that we should only consider
values where the evidence runs out or leaves uncertainty; they adopt a criterion of lexical priority of evidence over values. The motivation behind lexical priority is to avoid
reaching conclusions on the basis of wishful thinking rather than good evidence. While this is a real concern, I argue that giving lexical priority to evidential considerations over values is a mistake and unnecessary for avoiding wishful thinking. Values have a
deeper role to play in science.
1. Introduction. This article contributes to the project of understanding
the structure of values in science. This is distinct from strategic arguments
that try to establish that science is value laden while assuming premises of
the defenders of the value-free ideal of science. It is becoming increasingly
hard to deny that values play a role in scientific practice—specifically nonepistemic, noncognitive, or contextual values (e.g., moral, political, and aesthetic values; I will use the term “social values” to refer to such values in general). What is less clear is where scientific practice requires values or
value judgments. This is not primarily a historical or sociological question,
although historical and sociological data are frequently brought to bear. Ultimately it is a normative question about the role that value judgments ought
to play in science, how they can and should contribute to scientific inquiry.
*To contact the author, please write to: University of Texas at Dallas, 800 W. Campbell Road,
JO31, Richardson, TX 75080; e-mail:
[email protected].
†I would like to thank Ken Williford and the University of Texas at Arlington Philosophy Department; Martin Carrier, Don Howard, and the Center for Interdisciplinary Research (ZiF) at Bielefeld Universität; and Andrea Woody and the PSA for opportunities to present
these ideas, as well as the audiences at those talks for their valuable feedback. I would especially like to thank Heather Douglas, Kevin Elliott, Kristen Intemann, Dan Hicks, and
Philip Kitcher for their feedback and encouragement on this article.
Philosophy of Science, 80 (December 2013) pp. 829–839. 0031-8248/2013/8005-0008$10.00
Copyright 2013 by the Philosophy of Science Association. All rights reserved.
As such, we must consider both ethical questions about how the responsible
conduct of science requires value judgment and epistemological questions
about how the objectivity and reliability of science are to be preserved.
There are a number of phases of inquiry where values might play a role:
in determining the value of science itself and the research agenda to be pursued, in framing the problem under investigation and the method of data
collection, in choosing the hypothesis to propose, in the testing or certification of a proposed solution, and in choices about application and dissemination of results. Various accounts have allowed values in some stages, while
excluding them in others, or have argued for specific limits on the role for
values at each stage. In this article, I will focus on the testing phase, where
theories are compared with evidence and certified (or not) as scientific knowledge, as this is the most central arena for discussion of value-free versus value-laden science. Traditionally, philosophers of science have accepted a role for values in practice because it could be marginalized into the “context of discovery,” while the “context of justification” could be treated as epistemically
pure. Once we turn from the logical context of justification to the actual context
of certification in practice, the testing of hypotheses within concrete inquiries
conducted by particular scientists, we can no longer ignore the role of value
judgments.1
There are two main arguments in the literature for this claim: the error
argument from inductive risk and the gap argument from the underdetermination of theory by evidence. While both arguments have historically been
important and have established important roles for values in science, they
share a flawed premise: the lexical priority of evidence over values.2 While this premise serves the important aim of avoiding the problem of wishful thinking, I argue that it faces several problems. We
should seek an alternative ideal for science that provides a deeper and
broader role for values but nevertheless preserves an important feature of
science: the ability to surprise us with new information beyond or contrary
to our hopes and expectations.
2. Underdetermination: The Gap Argument. Underdetermination arguments for the value ladenness of science extend Duhem’s and Quine’s
thoughts about testing hypotheses. The starting point for this argument may
be the so-called Duhem-Quine Thesis (or Duhem-Neurath-Quine; Rutte 1991, 87) that no hypothesis can be tested in isolation because auxiliary
assumptions are needed for theories to generate testable hypotheses. There
1. I use “context of certification,” following Kitcher (2011), as referring to actual practices of acceptance.
2. Strictly speaking, both arguments can be taken as strategic arguments, compatible
with any positive approach to the role of values in scientific inquiry. For the purposes of
this article, I will instead take the arguments as attempts to articulate a positive ideal.
exists what Helen Longino calls a “semantic gap” between theory and evidence (Longino 2002, 2004, 2008). This is generally taken to imply that no theory can be definitively falsified by evidence, as the choice among rejecting the theory, altering the background assumptions, or even (although more controversially) rejecting the new evidence itself is underdetermined by each new item of evidence—call this “holist underdetermination” (Stanford 2009). Another form of underdetermination—“contrastive underdetermination” (Stanford 2009)—depends on the choice between identically confirmed rival hypotheses. As all of the evidence available equally supports
either hypothesis in such cases, that choice is underdetermined by the evidence.
If the evidence we are talking about is just all the evidence we have available to us at present, then we have transient underdetermination, which
might be relatively temporary or might be a recurrent problem. If instead
the choice is underdetermined by all possible evidence, we have permanent
underdetermination, and the competing theories or hypotheses are empirically equivalent. The global underdetermination thesis holds that permanent underdetermination is ubiquitous in science, applying to all theories and
hypotheses.3
The many forms of underdetermination arguments have in common the
idea that some form of gap exists between theory and observation. Feminists, pragmatists, and others have sought to fill that gap with social values
or to argue that doing so does not violate rational prescriptions on scientific inference. Call this the gap argument for value-laden science (Intemann 2005; Elliott 2011). Kitcher (2001) has argued that permanent or global underdetermination is needed to defeat the value-free ideal of science, and these forms of underdetermination are much more controversial. Transient underdetermination, however, is “familiar and unthreatening,” even “mundane” (30–31).
Kitcher is wrong on this point; transient underdetermination is sufficient
to establish the value ladenness of scientific practice (Biddle 2013). What
matters are decisions made in practice by scientists, and in many areas of
cutting-edge and policy-relevant science, transient underdetermination is
pervasive. Perhaps it is the case that in the long run of science (in an imagined Peircean “end of inquiry”) all value judgments would wash out. But
as the cliché goes, in the long run we are all dead; for the purposes of this
discussion, what we are concerned with are decisions made now, in the actual course of scientific practices, when the decision to accept or reject a
hypothesis has pressing consequences. In such cases, we cannot wait for
the end of inquiry for scientists to accept or reject a hypothesis, we cannot
depend on anyone else to do it, and we must contend with uncertainty and
3. For discussion of forms of underdetermination, see Kitcher (2001), Magnus (2003), Intemann (2005), Stanford (2009), and Biddle (2013).
underdetermination (see Elliott 2011, 62–64). Actual scientific practice supports this—scientists find themselves in the business of accepting and rejecting hypotheses in such conditions.
So what is the role for social values under conditions of transient underdetermination? Once the evidence is in, a gap remains in definitively determining how it bears on the hypothesis (holist case) or which competing hypothesis to accept (contrastive case). In this case, it can be legitimate to fill
the gap with social values. Among the competing hypotheses still compatible
with all the evidence, one might accept the one whose acceptance is likely to
do the most good or the least harm. In social science work involving race,
this might be the hypothesis most conducive to racial equality. One might do
this in a more nuanced way, mediating the decision via appropriate auxiliary
assumptions or constitutive values.
A common response is that despite the existence of the gap, we should
ensure that no social values enter into decisions about how to make the underdetermined choice (e.g., whether to accept a hypothesis). Instead, we might fill the gap with more complex inferential criteria (Norton 2008) or with so-called epistemic or cognitive values (Kuhn 1977; McMullin 1983). Proponents of the gap argument have argued that this at best pushes the question back one level, as choices of epistemic criteria or cognitive values (Longino 2002, 185) and the application of cognitive values may not be entirely determinate (Kuhn 1977). Ensuring that no values actually enter into decisions to accept or reject hypotheses under conditions of transient underdetermination may turn out to be impossible (Biddle 2013). Another attempt
to avoid a role for social value judgments—withholding judgment until
transient underdetermination can be overcome or resolved by application
of cognitive factors alone—is unreasonable or irresponsible in many cases,
for example, when urgent action requires commitment to one or another option (Biddle 2013).4
What distinguishes legitimate from illegitimate uses of values to fill the
gap is a matter of controversy, sometimes left unspecified. With some exceptions,5 underdeterminationists insist that values only come into play in filling
the gap (e.g., Longino 1990, 52; 2002, 127; Kourany 2003, 10).
4. Proponents of the error argument make a similar point.

5. These exceptions either use a somewhat different sort of appeal to underdetermination than the gap argument, or they use the gap argument as a strategic argument. One example is the extension of the Quinean web of belief to include value judgments (Nelson 1990); cf. n. 3 and sec. 7.

3. Inductive Risk: The Error Argument. While underdeterminationist arguments for values in science are probably more well known, and may have a history going back to Neurath’s early work (Howard 2006), the inductive risk argument for values in science is older still, going back to William James (1896).6 Heather Douglas has revived Rudner’s (1953) and Hempel’s (1965) version of the argument for the value ladenness of science, which basically goes as follows.
In accepting or rejecting hypotheses, scientists can never have complete
certainty that they are making the right choice—uncertainty is endemic to
ampliative inference. So, inquirers must decide whether there is enough evidence to accept or reject the hypothesis. What counts as enough should be
determined by how important the question is, that is, the seriousness of making a mistake. That importance or seriousness is generally (in part) an ethical
question, dependent on the ethical evaluation of the consequences of error.
Call this argument for the use of value judgments in science from the existence of inductive risk the error argument (Elliott 2011).
According to the error argument, the main role for values in certification
of scientific hypotheses has to do with how much uncertainty to accept, or
how strict to make your standards for acceptance. In statistical contexts, we
can think of this as the trade-off between type I and type II error. With a
fixed sample size (and assuming no control over the effect size), the only
way we can decrease the probability that we wrongly reject the null hypothesis is to increase the probability that we wrongly fail to reject the null
hypothesis, and vice versa. Suppose we are looking for a causal link between
a certain chemical compound and liver cancers in rats, and you take H0 to be no link whatsoever.7 If you want to be absolutely sure that you do not say that the chemical is safe when it in fact is not (because you value safety, precaution, and the welfare of potential third parties), you should decrease your rate of type II errors and thus increase your statistical significance level and your rate of type I errors. If you want to avoid “crying wolf” and asserting a link where none exists (because you value the economic benefits that come with avoiding overregulation), you should do the reverse.
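To make the trade-off concrete, here is a minimal numerical sketch (my own illustration, not from the text) of a one-sided z-test for such a link; the sample size, effect size, and significance levels are assumed purely for the sake of the example:

    # Illustrative sketch of the type I / type II error trade-off at a
    # fixed sample size. All numbers are assumptions, not data.
    from scipy.stats import norm

    n = 100        # fixed sample size (assumed)
    effect = 0.25  # standardized effect size if the link is real (assumed)

    for alpha in (0.10, 0.05, 0.01):
        z_crit = norm.ppf(1 - alpha)  # rejection threshold for H0: no link
        # Power: probability of rejecting H0 when the link is real
        power = 1 - norm.cdf(z_crit - effect * n ** 0.5)
        beta = 1 - power  # type II error rate: calling it safe when it is not
        print(f"alpha = {alpha:.2f} -> beta = {beta:.3f}")

Running this gives beta rising from roughly 0.11 at alpha = 0.10 to roughly 0.43 at alpha = 0.01: tightening the standard against false alarms (type I errors) necessarily raises the rate of missed real links (type II errors), and it is a value judgment, not the evidence, that sets the balance.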
Douglas emphasizes at length that values should never be taken as reasons for accepting or rejecting a hypothesis, reasons on a par with or having
the same sort of role as evidence in testing. This is an impermissible direct
role for values. In their permissible indirect role, values help determine the
rules of scientific method, for example, decisions about how many false positives or false negatives to accept. Values are not reasons guiding belief or
acceptance; they instead guide decisions about how to manage uncertainty.8
6. This point is due to P. D. Magnus (2013, 845), who refers to the inductive risk argument as the “James-Rudner-Douglas or JRD thesis” for reasons that will become immediately apparent.
7. Douglas (2000) considers the actual research on this link with dioxin.
8. In Toulmin’s (1958) terms, values cannot work as grounds or warrants for claims, but they can work as backing for warrants that connect grounds and claims.
Rudner (1953) anticipated the (common) objection that scientists should not be in the business of accepting or rejecting hypotheses but rather just indicating their probability (and thus not having to make the decisions described above). This gambit fails for several reasons: because there are inductive risks in phases of inquiry before final certification and because probabilistic hypotheses are just as open to inductive risks as others. Furthermore, the pragmatic signal that accompanies a refusal to assent to or deny a claim in practical or policy circumstances may be that the claim is far more questionable than the probabilities support. Simply ignoring the consequences of error—by refusing to accept or reject, by relying only on cognitive values, or by choosing purely conventional levels for error—may be irresponsible, as scientists like anyone else have the moral responsibility to consider the foreseeable consequences of their actions.
4. A Shared Premise. These two arguments against the value freedom of
science share a common premise. The gap argument holds that values can
play a role in the space fixed by the evidence; if the gap narrows (as with transient underdetermination), there are fewer ways in which values can play a role, and if the gap could ever close, the conclusion would be value free.9
The inductive risk argument allows values to play a role in decisions about
how to manage uncertainty, not directly by telling us which option to pick
but indirectly in determining how much uncertainty is acceptable.
Both arguments begin from a situation where the evidence is fixed and
take values to play a role in the space that is left over. The reason that values
must play a role is that uncertainty remains once the evidence is in. In a relatively weak version of this argument, social values fill in the space between
evidence and theory because something has to, so it might as well be (and often is) social values. In more sophisticated versions, we must use social
values to fill the gap because of our general moral obligation to consider the
foreseeable consequences of our actions, including the action of accepting
a hypothesis. The arguments of these two general forms all assume the lexical priority of evidence over values. The premise of lexical priority guarantees that even in value-laden science, values do not compete with evidence
when the two conflict. This is often defended as an important guarantor of
the objectivity or reliability of the science in question.
5. Why Priority? Why do proponents of value-laden science tend to be
attracted to such a strict priority of evidence over values? Perhaps some
such restriction is required in order to guarantee the objectivity of science.
In order for our science to be as objective as possible, maybe it has to be as
value free as possible (although this may not be very value free at all). That
9. Unless the view is supplemented by a form of holism more radical than Quine’s.
is, we want as much as possible to base our science on the evidence because
evidence lends objectivity and values detract from it. This would be a problematic justification for opponents of the value-free ideal of science to adopt.
With the gap and inductive risk arguments, they mean to show that values and
objectivity are not in conflict as such. It would thus create a serious tension in
their view if one premise depended on that conflict. If it is really objectivity
that is at stake in adopting lexical priority, we need a more nuanced approach.
The main concern is that value judgments might “drive inquiry to a predetermined conclusion” (Anderson 2004, 11), that inquirers might rig the game in favor of their preferred values. As Douglas (2009) puts it, “Values are not evidence; wishing does not make it so” (87). In other words, a core
value of science is its ability to surprise us, to force us to revise our thinking. Call the threat of values interfering with this process the problem of
wishful thinking.
Lexical priority avoids this problem insofar as what we value (which involves the way we desire the world to be) is only a consideration after we take all of the evidence (which fixes the way the world is) into account. In Douglas’s more nuanced approach, even once the evidence is in, social values (and even most cognitive values) are not allowed to be taken directly as reasons to believe anything; they only act as reasons for accepting a certain amount
of evidence as “enough.”
An alternative explanation may be that the adoption of lexical priority
has rhetorical value.10 Suppose, along with the defenders of the value-free ideal, that there is such a thing as objective evidence, which constrains belief. Even so, there is (at least transient) underdetermination and a gap that must be bridged by social values. Such an argument can undermine the value-free ideal and establish that there is a major role for values in science, but
as we turn instead to the positive project of determining more precisely the
roles of values in science, the premises of such an immanent critique are unfit ground for further development. We no longer need to take the premises
of our opponents on board, and we may find that they lead us astray.
While following the basic contours of my argument so far, one might object to the characterization of evidence as “prior” to values.11 What the gap and
inductive risk arguments purport to show is that there is always some uncertainty in scientific inference, and so there will always be value judgments
to be made about when we have enough evidence or which among equally
supported hypotheses we wish to accept, and so on. The pervasive need for
such judgments means that value freedom does not even make sense as a
limiting case; both values and evidence play a role, and neither is before the
10. My thanks to Don Howard for proposing this alternative interpretation.
11. My thanks to P. D. Magnus for bringing this objection to my attention.
other. This mistakes the sense of “priority” at work, however. Where priority matters is in what happens when values and evidence conflict; in such circumstances, lexical priority means that evidence will always trump values.
6. Problems with Priority. The versions of the gap and inductive risk arguments that presuppose the lexical priority of evidence make two related
mistakes. First, they require a relatively uncritical stance toward the status
of evidence within the context of certification, relative to values.12 The lexical priority principle assumes that in testing we ask, given the evidence, what
should we make of our hypothesis? Framed this way, values only play a role
at the margins. This is a mistake since evidence can turn out to be bad in
many ways: unreliable, unrepresentative, noisy, laden with unsuitable concepts and interpretations, or irrelevant for the question at hand. More importantly, we may be unaware of why the evidence is bad; it took a great deal
of ingenuity on the part of Galileo to show why the tower experiment did not
refute Copernicus, and it took much longer to deal with the problem of the
“missing” stellar parallax. While some epistemologists stick to an abstract
conception of evidence according to which evidence is itself unquestionable,
philosophers of science recognize that we can be skeptical about particular
pieces or sets of evidence based on their clash with hypotheses, theories, or
background assumptions that we have other good reasons to hold. As critics
of strict falsificationism and empiricism have shown, we already have reason to adopt a more egalitarian account of the process of testing, independent of the question about the role of values. On such a picture, hypotheses
and putative evidence are treated more on a level in processes of certification.
Second, the attitude toward values that lexical priority takes reduces the idea of value judgment to the mere expression of preferences rather than judgment properly so-called—in effect, it denies that we can have good reasons for our value judgments, or at least holds that they are systematically less reasonable. It is crucial to distinguish between preferences or valuings and value judgments or evaluations (Dewey 1915, 1939; Welchman 2002; Anderson 2010). Valuing may be the expression of a preference, but value judgments are reflective decisions about what to value and are better and worse on the basis of reasons. Value judgments may even be open to empirical test because they hypothesize relationships between a state or course of action to prefer and pursue and the desirability or value of the consequences of pursuing and attaining them (Dewey 1915; Anderson 2010).
“Roughly speaking, a value judgment hypothesizes ‘try it, you’ll like it’”—
12. As Douglas (2009) makes clear, she does not take the status of evidence as unproblematic, as such. But any issues with the evidence are to be taken into account by
prior consideration of values in selection of method and characterization of data, not the
context of certification.
a testable hypothesis (Anderson 2010). The evidence by which we test value judgments may include the emotional experiences that follow on adopting those values (Anderson 2004).
If value judgments are really judgments—adopted for good reasons, subject to certain sorts of tests—then it is unreasonable to treat them according to
the lexical priority of evidence. Just as the good (partly empirical) reasons
for adopting a theory, hypothesis, or background assumption can sometimes
give us good reasons to reinterpret, reject, or even ignore evidence apparently
in conflict with them, so too with a good value judgment. In treating values
as having qualitatively lower epistemic status than evidence, lexical priority
shows itself to be an unreasonable presumption. If evidence and values pull in
opposite directions on a hypothesis, then we should not always be forced to
follow the (putative) evidence.
7. Avoiding Wishful Thinking without Priority. If we reject the lexical
priority assumption and adopt a more egalitarian model of testing, we need
to adopt an alternative approach that can avoid the problem of wishful
thinking. An alternative principle to lexical priority is the joint necessity of
evidence and values, which requires joint satisfaction of epistemic criteria
and social values. This is the approach taken by Kourany ð2010Þ. On such a
view, the insertion of values, far from detracting from the rigor and objectivity of science, requires more rigorous standards of inquiry. Neither evidence nor values takes priority on a joint necessity account, but this principle leaves open the question of what to do when evidence and values clash.
One option is to remain dogmatic about both epistemic criteria and social
values and to regard any solution that flouts either as a failure, which appears
to be Kourany’s approach (Brown 2013; but see also Kourany 2013).
Alternatively, we can adopt the rational revisability of evidence and values in addition to joint necessity and revisit and refine our evidence or values. On this principle, both the production of evidence and value formation
are recognized as rational but fallible, revisable processes. Such views include
the radical version of Quinean holism that inserts values into the web of belief (Nelson 1990). The adoption of these two principles alone does not
prevent wishful thinking, but adding some basic restrictions like minimal
mutilation may overcome the problem (cf. Kitcher 2011).
Instead of Quinean holism, we might adopt a form of pragmatist functionalism about inquiry (Brown 2012), which differentiates the functional roles of evidence, theory, and values in inquiry. This retains the idea
that all three have to be coordinated and that each is revisable in the face of
new experience, while introducing further structure into their interactions.
According to such an account, not only must evidence, theory, and values
fit together in their functional roles, but they must do so in a way that actually resolves the problem that spurred the inquiry.
8. Conclusion. The lexical priority of evidence over values is an unreasonable commitment and unnecessary for its goal of avoiding wishful thinking.
The key to the problem of wishful thinking is that we not predetermine the
conclusion of inquiry, that we leave ourselves open to surprise. The real
problem is not the insertion of values but dogmatism about values (Anderson 2004). Rather than being the best way to avoid dogmatism, the lexical priority of evidence over values coheres best with a dogmatic picture of value
judgments and so encourages the illegitimate use of values. A better account
is one in which values and evidence are treated as mutually necessary, functionally differentiated, and rationally revisable components of certification.
Such an account would allow that evidence may be rejected because of lack
of fit with a favored hypothesis and compelling value judgments, but only
so long as one is still able to effectively solve the problem of inquiry.
REFERENCES
Anderson, Elizabeth. 2004. “Uses of Value Judgments in Science: A General Argument, with
Lessons from a Case Study of Feminist Research on Divorce.” Hypatia 19 (1): 1–24.
———. 2010. “Dewey’s Moral Philosophy.” In Stanford Encyclopedia of Philosophy, ed. Edward
N. Zalta. Stanford, CA: Stanford University.
Biddle, Justin. 2013. “State of the Field: Transient Underdetermination and Values in Science.”
Studies in History and Philosophy of Science A 44 (1): 124–33.
Brown, Matthew J. 2012. “John Dewey’s Logic of Science.” HOPOS: Journal of the International
Society for the History of Philosophy of Science 2 (2): 258–306.
———. 2013. “The Source and Status of Values in Socially Responsible Science.” Philosophical
Studies 163:67–76. doi:10.1007/s11098-012-0070-x.
Dewey, John. 1915. “The Logic of Judgments of Practice.” In The Middle Works, 1899–1924,
vol. 8, ed. J. A. Boydston. Carbondale: Southern Illinois University Press.
———. 1939. “Theory of Valuation.” In The Later Works, 1925–1953, vol. 13, ed. J. A. Boydston.
Carbondale: Southern Illinois University Press.
Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67 (4):
559–79.
———. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Elliott, Kevin C. 2011. Is a Little Pollution Good for You? Incorporating Societal Values in
Environmental Research. Environmental Ethics and Science Policy Series. New York: Oxford
University Press.
Hempel, Carl G. 1965. “Science and Human Values.” In Aspects of Scientific Explanation and
Other Essays in the Philosophy of Science, 81–96. New York: Free Press.
Howard, Don A. 2006. “Lost Wanderers in the Forest of Knowledge: Some Thoughts on the
Discovery-Justification Distinction.” In Revisiting Discovery and Justification: Historical and
Philosophical Perspectives on the Context Distinction, ed. Jutta Schickore and Friedrich
Steinle, 3–22. Dordrecht: Springer.
Intemann, Kristen. 2005. “Feminism, Underdetermination, and Values in Science.” Philosophy of
Science 72 (5): 1001–12.
James, William. 1896. “The Will to Believe.” New World 5:327–47.
Kitcher, Philip. 2001. Science, Truth, and Democracy. Oxford: Oxford University Press.
———. 2011. Science in a Democratic Society. Amherst: Prometheus.
Kourany, Janet A. 2003. “A Philosophy of Science for the Twenty-First Century.” Philosophy of
Science 70 (1): 1–14.
———. 2010. Philosophy of Science after Feminism. Oxford: Oxford University Press.
———. 2013. “Meeting the Challenges to Socially Responsible Science: Reply to Brown, Lacey,
and Potter.” Philosophical Studies 163 (1): 93–103. doi:10.1007/s11098-012-0073-7.
Kuhn, Thomas S. 1977. “Objectivity, Value Judgment, and Theory Choice.” In The Essential Tension: Selected Studies in Scientific Tradition and Change, 320–39. Chicago: University of Chicago Press.
Longino, Helen E. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry.
Princeton, NJ: Princeton University Press.
———. 2002. The Fate of Knowledge. Princeton, NJ: Princeton University Press.
———. 2004. “How Values Can Be Good for Science.” In Science, Values, and Objectivity, ed.
Peter Machamer and Gereon Wolters, 127–42. Pittsburgh: University of Pittsburgh Press.
———. 2008. “Values, Heuristics, and the Politics of Knowledge.” In The Challenge of the Social
and the Pressure of Practice: Science and Values Revisited, ed. Martin Carrier, Don Howard,
and Janet A. Kourany, 68–86. Pittsburgh: University of Pittsburgh Press.
Magnus, P. D. 2003. “Underdetermination and the Claims of Science.” PhD diss., University of
California, San Diego.
———. 2013. “What Scientists Know Is Not a Function of What Scientists Know.” Philosophy of
Science, in this issue.
McMullin, Ernan. 1983. “Values in Science.” In PSA 1982: Proceedings of the Biennial Meeting of
the Philosophy of Science Association, ed. Peter D. Asquith and Thomas Nickles, 3–28. East
Lansing, MI: Philosophy of Science Association.
Nelson, Lynn Hankinson. 1990. Who Knows: From Quine to a Feminist Empiricism. Philadelphia:
Temple University Press.
Norton, John D. 2008. “Must Evidence Underdetermine Theory?” In The Challenge of the Social
and the Pressure of Practice: Science and Values Revisited, ed. Martin Carrier, Don Howard,
and Janet A. Kourany, 17–44. Pittsburgh: University of Pittsburgh Press.
Rudner, Richard. 1953. “The Scientist qua Scientist Makes Value Judgments.” Philosophy of
Science 20 ð1Þ: 1–6.
Rutte, Heiner. 1991. “The Philosopher Otto Neurath.” In Rediscovering the Forgotten Vienna Circle: Austrian Studies on Otto Neurath and the Vienna Circle, ed. Thomas Ernst Uebel, 81–94.
Dordrecht: Kluwer.
Stanford, Kyle. 2009. “Underdetermination of Scientific Theory.” In Stanford Encyclopedia of
Philosophy, ed. Edward N. Zalta. Stanford, CA: Stanford University.
Toulmin, Stephen. 1958. The Uses of Argument. Cambridge: Cambridge University Press.
Welchman, Jennifer. 2002. “Logic and Judgments of Practice.” In Dewey’s Logical Theory: New
Studies and Interpretations, ed. F. Thomas Burke, D. Micah Hester, and Robert B. Talisse.
Nashville: Vanderbilt University Press.