Two Dogmas of Empiricism by W. V. Quine, Explained
Introduction
Quine asserts that modern empiricism has been influenced by two main assumptions or
"dogmas." The first is the belief in a fundamental difference between analytic truths (true
by virtue of meanings) and synthetic truths (true by virtue of facts). The second dogma is
reductionism, the idea that every meaningful statement can be reduced to a logical construct
based on immediate experience. Quine aims to show that both of these dogmas are flawed.
Abandoning them blurs the line between metaphysics and natural science and shifts the focus
toward pragmatism.
I. Background for Analyticity
Quine critiques the idea that all analytic statements of the second class (those that
are not logical truths outright, e.g., "No bachelor is married") can be reduced to logical
truths (statements true by virtue of their logical form, like "No unmarried man is married")
through definitions. He questions how we
know that terms like "bachelor" are defined as "unmarried man." He argues that definitions
recorded in dictionaries reflect common usage rather than establish it, meaning
lexicographers observe and record how words are used rather than create these
relationships.
Definitions in dictionaries are essentially empirical observations about language usage,
not prescriptive rules. If a lexicographer defines "bachelor" as "unmarried man," it’s
because this reflects common usage, not because they have the authority to dictate meaning.
Therefore, definitions are reports on how words are used rather than the origin of their
meaning or synonymy.
Quine acknowledges that philosophers and scientists often define terms to make
complex ideas more understandable. However, these definitions usually describe pre-existing
relationships of synonymy rather than create them. They clarify how terms are used in
particular contexts.
Determining when two linguistic forms are synonymous is complicated and usually
grounded in how they are used in language. Definitions that reflect synonymy are reports on
usage rather than the origin of the synonymy. Thus, definitions tell us how words are used but
don't explain why or how they come to mean what they do.
Quine introduces the concept of explication, as proposed by Carnap, which goes beyond
mere reporting of synonymy. Explication involves refining or improving the meaning of a term
while maintaining its use in clear and precise contexts. This activity doesn’t merely report
existing synonymies but creates clearer or more useful terms based on existing usage.
Explications aim to preserve the usage of a term in contexts where it is already clear
and precise while enhancing its usage in other contexts. Thus, even if the new definition is not
strictly synonymous with the old, it serves to clarify and improve upon the original term.
Quine acknowledges a type of definition that doesn’t rely on pre-existing synonymy:
the creation of new notations for the sake of abbreviation. In such cases, a new term is
introduced explicitly to be synonymous with a more complex expression. This is the clearest
form of synonymy created by definition.
Definitions in formal logic and mathematics often have a reassuring connotation
because they seem precise and clear. However, their role in these fields is often
misunderstood.
In logical and mathematical systems, there are two main goals: economy of expression
and economy in grammar and vocabulary. The first goal focuses on ease and brevity of
communication, requiring concise notations for many concepts. The second goal focuses on
reducing the number of basic concepts to simplify the theoretical structure of the language,
even if this makes individual statements longer.
Despite their seeming incompatibility, both forms of economy are valuable. The
custom has arisen to use two interconnected languages: a broader one that is redundant but
concise in message lengths, and a narrower, primitive one that is minimal in vocabulary and
grammar but longer in individual statements.
Definitions in formal systems are translations between these two languages, showing
how complex concepts can be expressed using the minimal vocabulary of the primitive
notation. These definitions should be seen as links between two languages rather than mere
adjuncts to one.
These correlations between the broader language and the primitive notation aim to
show that everything expressible in the broader language can also be expressed in the
primitive notation; all that is sacrificed is brevity and convenience. Depending on the task,
definitions may either paraphrase existing terms, improve upon them, or create new notations.
Quine concludes that, except for the clear case of new notations created for specific
purposes, definitions largely rely on pre-existing synonymies. Therefore, the notion of
definition alone cannot explain synonymy and analyticity. To understand these concepts
better, we need to delve deeper into the nature of synonymy itself, beyond mere definitions.
III. Interchangeability
Quine explores the idea that synonymy (the sameness of meaning between two
expressions) might be defined by their interchangeability in all contexts without changing the
truth value of statements in which they appear. This concept is known as "interchangeability
salva veritate" (interchangeability preserving truth). He acknowledges that synonyms might
be vague but should match in their vagueness.
Quine notes that the terms "bachelor" and "unmarried man" are not always
interchangeable without changing the truth value. For instance, "bachelor of arts" or
"bachelor's buttons" have specific meanings that don't correspond to "unmarried man."
Additionally, within quotation contexts, as in "'bachelor' has less than ten letters," substitution fails.
These counterexamples show that straightforward interchangeability isn’t sufficient for
defining synonymy.
Quine suggests that by treating phrases like "bachelor of arts" or quoted words as
single, indivisible units, we might salvage the interchangeability criterion. However, this
requires a clear definition of what constitutes a "word," which itself is a challenging problem.
Nonetheless, addressing synonymy might become simpler if it can be reduced to a problem of
defining "wordhood."
The next question is whether interchangeability preserving truth value, excluding
fragmentary occurrences within words, is strong enough to define cognitive synonymy (the
kind of synonymy relevant to understanding meaning and analyticity). Cognitive synonymy is
not about complete identity in associations or poetic quality but about the meaning necessary
for turning analytic statements into logical truths by substituting synonyms.
Quine revisits the notion of cognitive synonymy from an earlier section, where an
analytic statement could be converted into a logical truth by substituting synonyms. He
proposes that "bachelor" and "unmarried man" are cognitively synonymous if the statement
"All and only bachelors are unmarried men" is analytic.
For Quine, to explain analyticity using cognitive synonymy, one must account for
cognitive synonymy without presupposing analyticity. He considers interchangeability
preserving truth value, except within words, as a candidate for this account. The question is
whether this interchangeability is sufficient for cognitive synonymy.
Quine argues that if "bachelor" and "unmarried man" are interchangeable in all
contexts salva veritate, then the truth of "Necessarily, all and only bachelors are
bachelors" yields the truth of "Necessarily, all and only bachelors are unmarried men." But
to say that this latter statement is true is just to say that "All and only bachelors are
unmarried men" is analytic, which establishes cognitive synonymy.
Quine acknowledges that this argument presupposes a language rich enough to
include modal operators like "necessarily." If such language assumes the notion of analyticity,
the argument seems circular or at least self-referential, making synonymy depend on an
understanding of analyticity that we are trying to define.
He suggests that interchangeability preserving truth value must be relative to a
specific language, particularly one that does not include problematic terms like
"necessarily." In an extensional language (where predicates true of the same objects are
interchangeable), interchangeability doesn't ensure cognitive synonymy, as it only reflects
factual agreement, not meaning.
In an extensional language, "bachelor" and "unmarried man" being interchangeable
doesn’t confirm cognitive synonymy, as this could be accidental (like "creature with a heart"
and "creature with a kidney"). Extensional agreement is a practical approximation but
insufficient for explaining analyticity.
Quine concludes that interchangeability in an extensional language doesn’t suffice
for cognitive synonymy needed to derive analyticity. If the language includes modal terms like
"necessarily," interchangeability might indicate cognitive synonymy, but this presupposes a
clear notion of analyticity.
Quine suggests reversing the approach: instead of defining cognitive synonymy to
derive analyticity, we should try explaining analyticity without relying on cognitive synonymy.
Once analyticity is understood, cognitive synonymy can be derived from it. For instance, the
analyticity of "All and only bachelors are unmarried men" implies the cognitive synonymy of
"bachelor" and "unmarried man."
Quine extends this idea to all predicates and other syntactical categories. Singular
terms are cognitively synonymous if their identity statements are analytic, and statements
are synonymous if their biconditional forms are analytic. He proposes a generalized approach,
where two linguistic forms are cognitively synonymous if they are interchangeable, preserving
analyticity, excluding ambiguous or homonymous cases. He decides to focus directly on
analyticity rather than synonymy.
IV. Semantical Rules
Quine begins by reflecting on how analyticity seemed most naturally defined through
meanings. However, refining this idea shifted the focus from meanings to synonymy or
definition. Yet, defining synonymy depended on analyticity itself, leading to a circular
problem. Thus, we still face the issue of clearly defining analyticity.
Quine questions whether his uncertainty about whether "Everything green is
extended" is analytic stems from not fully understanding the meanings of "green" and
"extended." He argues that the problem lies not with the terms themselves but with the
concept of "analytic."
He notes that some suggest the difficulty in distinguishing analytic from synthetic
statements in ordinary language is due to its vagueness. They claim that this distinction
becomes clear in a precise artificial language with explicit semantical rules. Quine disagrees
and aims to demonstrate that this is a misconception.
Quine explains that analyticity concerns the relation between statements and
languages: a statement S is analytic for a language L. The challenge is to clarify this
relation generally, even for artificial languages. He argues that the problem persists for
artificial languages just as it does for natural languages.
He refers to Carnap's work on semantical rules for artificial languages. If the
semantical rules specify which statements are analytic, these rules themselves would need to
use the word "analytic." This presents a problem because we don’t understand "analytic" well
enough to understand such rules.
Alternatively, these rules might define a new term like "analytic-for-L," which
doesn’t clarify the concept of "analytic." Quine emphasizes that simply stating which
statements are analytic in a language does not explain what it means for a statement to be
analytic.
Quine suggests that even if we know which statements are analytic in a specific
language L, we still don’t understand what makes them analytic. This issue remains unsolved
even when we limit L to artificial languages. The term "semantical rule of" needs as much
clarification as "analytic for."
One might argue that an artificial language L is a language plus a set of explicit
semantical rules, forming an ordered pair. However, Quine suggests this just moves the
problem around without solving it, as we could just as well define L directly by listing its
analytic statements.
Quine acknowledges that he hasn’t covered all forms of Carnap’s explanations of
analyticity but indicates the general principle applies. Sometimes semantical rules translate
into ordinary language, identifying analytic statements by their ordinary language
counterparts, which doesn’t solve the problem either.
He concludes that artificial languages with semantical rules don’t help clarify
analyticity unless we already understand analyticity. These rules are only useful if we
already grasp the concept of analyticity, hence they don’t aid in defining or understanding it.
Hypothetical languages might help clarify analyticity if they incorporate relevant
mental, behavioral, or cultural factors. However, merely assuming analyticity as an
irreducible character in a model won’t illuminate the concept.
Quine highlights that truth depends on both language and extralinguistic facts. The
statement "Brutus killed Caesar" would be false if either historical facts or the meanings of
words were different. This suggests that truth comprises both a linguistic and a factual
component.
He argues that some believe in purely linguistic truths with no factual component,
the analytic statements. However, Quine asserts that no clear boundary between analytic
and synthetic statements has been established. He critiques the belief in such a distinction as
an unfounded dogma, even among empiricists.
V. The Verification Theory and Reductionism
Quine has critiqued the concepts of meaning, cognitive synonymy, and analyticity.
Now, he turns to the verification theory of meaning to see if it can address these issues. This
theory, popular among empiricists, proposes that the meaning of a statement is tied to how it
can be empirically verified.
The verification theory of meaning asserts that a statement’s meaning is the method
by which it can be confirmed or refuted through experience. An analytic statement is a
special case that is true regardless of any empirical verification, because it is true by virtue
of its meaning alone.
Instead of focusing on meanings as entities, Quine suggests focusing on sameness of
meaning or synonymy. According to the verification theory, statements are synonymous if
they share the same method of empirical confirmation or infirmation.
This approach defines cognitive synonymy for statements, not for linguistic forms in
general. However, once we have synonymy for statements, we can extend it to other linguistic
forms by determining if substituting one form for another in any statement results in a
synonymous statement. Thus, analyticity can be defined in terms of the synonymy of
statements and logical truth.
If the verification theory adequately explains statement synonymy, analyticity is
preserved. But Quine questions what constitutes the empirical methods for confirming or
infirming statements and the nature of the relationship between a statement and the
experiences that confirm or disconfirm it.
A simplistic view is that this relationship is one of direct report, known as radical
reductionism. This view holds that every meaningful statement can be translated into
statements about immediate experiences. This idea predates the verification theory and was
suggested by philosophers like Locke and Hume, who argued that ideas originate from sense
experiences or are combinations of such ideas.
Quine explains that this reductionism is restrictive if applied term-by-term. Instead,
he proposes taking full statements as the units of meaning, requiring statements to be
translatable into sense-datum language as wholes, not by individual terms.
Locke, Hume, and Tooke would likely have supported this view, but it required
further development. One such development was the verification theory of meaning, which
shifted focus from words to statements. Another was Russell’s concept of incomplete symbols,
which emphasized the significance of statements rather than individual terms.
In radical reductionism with statements as units, the goal is to specify a sense-datum
language and translate significant discourse into it statement by statement. Carnap
attempted this in his book "The Logical Structure of the World" (Aufbau), though his starting
point included logic and mathematics, not just sensory data.
Carnap’s approach used a minimal sensory basis and defined additional sensory
concepts using logic. While innovative, this only partly fulfilled the reductionist program, as it
left many simple physical statements inadequately translated.
Carnap's method of assigning truth values to statements about physical objects was
sophisticated, but it did not provide a way to translate such statements into the sense-datum
language. The relationship between qualities and point-instants remained undefined, meaning
the connective "is at" could not be eliminated.
Carnap later abandoned the idea of translating physical world statements into
immediate experience statements, moving away from radical reductionism.
Despite this shift, the notion persisted among empiricists that each statement could
be individually confirmed or disconfirmed by sensory events. This notion underlies the
verification theory of meaning.
Quine argues that this dogma of reductionism is flawed. He suggests, instead, that
our statements about the world are tested as a whole, not individually. This idea stems from
Carnap’s work and implies that science is confirmed or disconfirmed collectively.
The dogma of reductionism is closely related to the distinction between analytic and
synthetic statements. If statements can be confirmed or disconfirmed individually, it
supports the idea of analytic statements that are true regardless of empirical evidence.
Quine asserts that the distinction between analytic and synthetic statements, and the
idea of confirming statements individually, is misguided. He suggests that the truth of
statements depends on both language and extralinguistic facts, but this duality applies to
science as a whole, not to individual statements.
Quine concludes that taking statements as units of meaning, as Russell proposed, was
a step forward. However, he argues that even statements are too fine-grained, and we
should consider the whole of science as the unit of empirical significance.
VI. Empiricism without the Dogmas
Quine begins by describing our entire system of knowledge and beliefs as a man-made
fabric. This includes everything from mundane facts to the most abstract scientific
laws. He likens this system to a field of force where experiences form the boundary
conditions. When experiences contradict the system, adjustments are made within the system
to restore consistency. These adjustments affect not only the statements directly involved
but also those logically connected to them.
Quine argues that because our system of knowledge is only loosely determined by
experience, there is flexibility in how we adjust it in response to new experiences. No specific
experience corresponds directly to a particular statement except through complex
interrelations within the entire system. Thus, it is misleading to talk about the empirical
content of an individual statement, especially those far removed from direct experience.
Consequently, distinguishing between synthetic and analytic statements becomes meaningless,
as any statement can be maintained by adjusting others elsewhere in the system.
Quine elaborates that we could, theoretically, maintain any statement as true
despite contrary evidence by making significant changes elsewhere in our system of
knowledge. Even statements close to direct sensory experience can be upheld by claiming
hallucination or altering logical laws. No statement is immune to revision. This notion is
exemplified by historical shifts in scientific paradigms, such as moving from Newtonian to
Einsteinian physics, which show that even fundamental principles can be revised.
For clarity, Quine explains the idea of statements' proximity to sensory experiences
without using metaphor. Some statements, though about physical objects, are more directly
linked to sense experiences and are thus nearer to the experiential periphery. These
statements are more likely to be revised in light of new experiences. This pragmatic approach
suggests that our inclination to disturb the system minimally leads us to adjust statements
closely tied to specific experiences rather than more abstract theoretical statements.
Quine views the conceptual scheme of science as a tool for predicting future
experiences based on past ones. Physical objects are not defined strictly in terms of
experience but are posited as convenient intermediaries, similar to mythological entities.
Quine himself believes in physical objects and not in mythological gods, but only because
positing physical objects is a more effective means of organizing experience.
Quine draws an analogy with irrational numbers in mathematics. Just as irrational
numbers simplify the treatment of rational numbers despite being conceptual additions,
physical objects simplify our understanding of experience. The analogy holds even though
irrational numbers eventually were given a rigorous foundation, illustrating that posits serve
to facilitate theory rather than directly correspond to experience.
Quine extends the analogy by suggesting that physical objects, like irrational
numbers, are posits that simplify our engagement with experience. They are not reducible to
sensory experiences, but their inclusion in our theoretical framework allows for more
efficient transitions between statements about experience.
The differences between positing physical objects and irrational numbers are
twofold. First, the simplification provided by physical objects is much more significant. Second,
positing physical objects is a much older practice, likely concurrent with the development of
language itself, which relies on intersubjective reference.
Quine continues that science extends the practice of common sense by positing
microscopic and subatomic objects to simplify laws concerning macroscopic objects and
experiences. These entities do not need to be fully defined in terms of macroscopic ones, just
as macroscopic objects are not defined strictly by sensory data. This ongoing expansion of
ontology helps to manage and simplify our understanding of the world.
Quine adds that forces and abstract mathematical entities are also posits,
functioning similarly to physical objects and gods. They are myths that help us deal with
sensory experiences more effectively. The difference between these myths lies in their utility
for organizing experience.
Quine concludes that the entirety of science, with its intricate myths and fictions,
aims to simplify laws and maintain consistency with experience. Ontological questions, like
whether to accept classes as entities, are akin to questions of natural science. They are
about choosing convenient frameworks rather than uncovering ultimate truths.
Quine agrees with Carnap that selecting a conceptual framework is about
convenience but contends that this applies to all scientific hypotheses, not just ontological
questions. Carnap's distinction between analytic and synthetic propositions underpins his
double standard, a distinction Quine rejects.
Quine acknowledges that some issues seem more about choosing a conceptual
scheme, while others seem like matters of fact. However, he argues that this difference is
one of degree. It depends on pragmatic considerations, such as our inclination to make
minimal adjustments and seek simplicity in accommodating new experiences.
Quine critiques the pragmatism of Carnap, Lewis, and others, which stops at the
boundary between the analytic and synthetic. By rejecting this boundary, Quine advocates a
more thorough pragmatism. He believes that the adjustments made to scientific heritage in
response to sensory experiences are guided by pragmatic considerations, ensuring that
science remains a useful tool for navigating the world.