9. Representing Meaning
1. Introduction
2. Key issues in semantic representation
3. Theoretical perspectives
4. An integrated proposal: combining language-based and experiential information
5. Conclusion
6. References
1. Introduction
Understanding the meaning of words is crucial to our ability to communicate. To do so, we must reliably map the arbitrary form of a spoken, written or signed word onto the corresponding concept, whether it is present in the environment, tangible, or merely imagined (Meteyard et al. 2012: 2). In this chapter we review two current approaches to
understanding word meaning from a psychological perspective: embodied and distribu-
tional theories. Embodied theories propose that understanding words’ meanings requires
mental simulation of entities being referred to (e.g., Barsalou 1999; see also Bergen this
volume) using the same modality-specific systems involved in perceiving and acting
upon such entities in the world. Distributional theories on the other hand typically de-
scribe meaning in terms of language use: something arising from statistical patterns that
exist amongst words in a language. Instead of focusing on bodily experience, distribu-
tional theories focus upon linguistic data, using statistical techniques to describe words’
meanings in terms of distributions across different linguistic contexts (e.g., Landauer
and Dumais 1997; Griffiths et al. 2007). These two general approaches have traditionally been set in opposition, although this need not be the case (Andrews et al. 2009); in fact, integrating them may yield better semantic models (Vigliocco et al. 2009).
We will highlight some key issues in lexical representation and processing and de-
scribe historical predecessors for embodied theories (i.e., featural approaches) and distri-
butional theories (i.e., holistic approaches). We conclude by proposing an integrated
model of meaning where embodied and linguistic information are both considered vital
to the representation of words’ meanings.
2. Key issues in semantic representation
2.1. Are words from different domains represented in the same way?
The vast majority of research investigating semantic representation has focused on con-
crete nouns. The past decade has seen increasing research into the representation of action verbs and a growing interest in how abstract meaning is represented. A critical question is whether the same overarching principles apply across these domains, or whether organisational principles must differ.
A fundamental difference between objects and actions is that objects can be thought
of in isolation, as discrete entities, but actions are more complex, describing relations
among multiple participants (Vigliocco et al. 2011). Connected to this are temporal dif-
ferences: actions tend to be dynamic events with a particular duration while objects are
stable with long-term states.
Because of the stable nature of objects, nouns’ meanings tend to be relatively fixed.
Verbs’ meanings are less constrained and often more polysemous. These differences
could support different representational principles for object-nouns and action-verbs,
but do not preclude a semantic system in which objects and actions are represented in
the same manner and differences in organisation arise from differences in representation-
al content. Such an example is described by Vigliocco et al.'s (2004) FUSS model, in
which representations for action and object words are modelled in the same lexico-
semantic space, using the same principles and tools. Differences emerge from differences
in the featural properties of the two domains rather than different principles of organisa-
tion.
When comparing concrete and abstract words, there is a stronger case for assuming
different content and different organisational principles. It is well established that pro-
cessing abstract words takes longer than processing concrete words (the “concreteness
effect”) for which Paivio’s dual-coding theory provides a long-standing account (e.g.,
Paivio 1986). Under this view two separate systems contribute to word meaning: a word-
based system and an image-based system. Whereas concrete words use both systems
(with greater reliance on the latter), abstract words rely solely on word-based informa-
tion. The concreteness effect would occur because concrete words use two systems in-
stead of one, thus having richer and qualitatively different semantic representations than
abstract words.
An alternative view, the context availability theory (Schwanenflugel and Shoben
1983), does not require multiple representational systems to account for the concreteness
effect. Under this view, advantages for concrete words come from differences in associa-
tions between words and previous knowledge (i.e., differences in the number of links,
rather than in content/organisation), with abstract concepts being associated with much
less context. Here the concreteness effect results from the availability of sufficient con-
text for processing concrete concepts in most language situations, but deficient context
for processing abstract words (Schwanenflugel and Shoben 1983).
More recent proposals for the differences between concrete and abstract concepts
and words include viewing abstract knowledge as arising out of metaphorical extension
(Boroditsky 2000; Lakoff and Johnson 1980; Bergen this volume; Gibbs this volume),
or differences in featural properties rather than different principles of organisation for
abstract and concrete meaning: sensorimotor information underlying concrete meanings
and affective and linguistic information underlying abstract meanings (Kousta et al.
2011).
To summarise, theories of semantic representation make different assumptions about
semantic representations for different domains of knowledge, varying from a single,
unitary semantic system to a much more fractionated system, where different principles
of organisation are specified for different word types. However, there exists no strong
evidence for assuming different principles, and following the argument of parsimony,
we argue for a unitary system based on the same principles across domains. Instead of
different organisational principles, differences across domains come about due to differ-
ences in content, namely differences in the extent to which a given type of content is
most important for a given domain: sensory-motor information for the concrete domain
and emotion and linguistic information for the abstract domain (Vigliocco et al. 2009).
2.2. Are word meanings and concepts the same?
A second key issue concerns the relationship between our conceptual knowledge (the knowledge we use to categorise and understand the
world) and the language we use. Since we begin life exploring and learning about our
world, with language developing later, conceptual knowledge ultimately must develop
before language. One important issue then is how words relate to conceptual knowledge.
Should we think of word meanings and concepts interchangeably? This relationship has
many important implications, for example for the extent of translation equivalence across languages.
One argument for treating words and concepts interchangeably is that many robust phe-
nomena have been found to affect them both. If the same factors affect both and they behave
similarly, then they must be closely linked, if not interchangeable. For example, feature
type, feature correlations and distinguishing features have been shown to explain category-
specific deficits in categorization of concepts (e.g., McRae and Cree 2002) and semantic
priming effects for words (McRae and Boisvert 1998). Because characteristics of conceptual features seem to have comparable effects, it would be parsimonious to consider conceptual representations the same as word meanings (consistent with Langacker 1982).
There are reasons, however, to suggest that there is not a one-to-one mapping between
the two. First, we possess far more concepts than words. There are often actions or
situations that we know well and understand that are not lexicalized such as “the actions
of two people manoeuvring for one armrest in a movie theatre or airplane seat” (Hall
1984, discussed in Murphy 2002). Further, one word can be used to refer to multiple meanings (i.e., polysemy) and so refers to a set of concepts instead of a single concept
(see Gries this volume). This matter is further complicated when we look at cross-
linguistic differences in links between conceptual knowledge and linguistic representa-
tions (see Vigliocco and Filipović 2004).
There are many examples of cross-linguistic differences in semantic representations
that do not have any obvious explanation. For instance, whereas English and Italian speakers use different words to denote the foot and the leg, Japanese speakers use a single word, ashi, for both. One could hardly argue that, conceptually, Japanese
speakers do not know the difference between one’s foot and one’s leg. If linguistic
categories are based on one-to-one mappings with conceptual structure, then cross-lin-
guistic differences have clear implications for the assumption of universality of conceptu-
al structure.
With the above issues in mind, below we present the two main perspectives on seman-
tic representation, guided by the ideas that the same organising principles apply across
word types and that meaning is distinct from but strongly linked to conceptual know-
ledge (e.g., Vigliocco and Filipović, 2004).
3. Theoretical perspectives
The main theoretical approaches to word meaning can be clustered into those that consid-
er our sensorimotor experience as the building blocks of semantic representation and
those that instead consider statistical patterns in language as the building blocks. This
great divide corresponds to disciplinary boundaries between cognitive psychology and
neuroscience on one side and computational linguistics and computer science on the
other side. Within linguistics, both perspectives are represented, echoing the distinction between sense and reference that dates back to Frege ([1892] 1952).
3.1. Embodiment
Fig. 9.1: Amodal vs. perceptual symbols. Taken from Barsalou, L. W., Simmons, W. K., Barbey,
A., and Wilson, C. D. (2003). Grounding conceptual knowledge in modality-specific
systems. Trends in Cognitive Sciences, 7, 84–91. (a) In amodal symbol systems neural
representations from vision are transduced into an amodal representation such as a frame,
semantic network or feature list. These amodal representations are used during word
understanding. (b) In perceptual symbol systems neural representations from vision are
partially captured by conjunctive neurons, which are later activated during word compre-
hension to re-enact the earlier state.
Feature-based studies suggest, for example, that action verbs tend to be associated with fewer sensory features than object nouns (Vinson and Vigliocco 2002). In this sense, verbs could
be considered to be more abstract than nouns (Bird et al. 2003). These differences have
been invoked to account for patients with selective deficits in retrieving and producing nouns, and for others with greater difficulty with verbs (see Vigliocco et al. 2011). It is
questionable whether these theories can be extended to account for differences between
concrete and abstract words. However, a recently published collection of feature norms
found that participants can generate features for abstract words with general agreement
across subjects that could not be explained simply by associations (Buchanan et al.
2012).
Featural theories usually focus on concepts, not words (although concepts and words
are often implicitly or explicitly assumed as the same). There are theories, however,
that assume a separate semantic level where features are bound into a lexico-semantic
representation (Vigliocco et al. 2004), and others that hypothesize “convergence zones”
in the brain where information from multiple modalities is integrated (Damasio 1989,
Simmons and Barsalou 2003; see Vigliocco et al. 2012).
Embodiment theories build upon these earlier accounts, and research that supports featural representations is largely compatible with embodied views. For example,
semantic priming based on overlapping features (McRae and Boisvert 1998) could be
explained by overlap in activation of the same sensorimotor area (e.g., Pecher et al.
2003).
mazza’s view is not completely disembodied, but rather falls along a continuum, as we
will describe in the next section.
Theories of embodiment vary in terms of how strongly they define the role of the sensori-
motor systems in semantic representation. Theories can be considered along a continuum
from strongly embodied (full simulation), through weak embodiment and secondary em-
bodiment, and then moving beyond embodiment to fully symbolic, disembodied theories
(Meteyard et al. 2012; see Figure 9.2). Distributional approaches could be placed on
the extreme, “disembodied” end of the continuum, assigning no role for sensory-motor
information. Theories supporting secondary embodiment still see semantics as amodal
and abstract but propose that semantic representation and sensory-motor information are
directly associated, for example, amodal representations derived from sensory-motor in-
put (Patterson et al. 2007). For weak embodiment, semantic representations are partly
instantiated by sensory-motor information which does have a representational role, but
some degree of abstraction still takes place. Areas adjacent to primary sensory-motor
areas are involved in semantic representation and are reciprocally linked to primary
areas. From a strong embodiment perspective, semantic processing necessarily activates
sensory-motor information and is completely dependent upon it. Here, semantic process-
ing takes place within primary sensory and motor areas and precisely the same systems
are used for semantic processing and sensory-motor processing.
A fully symbolic theory is problematic because there is no link between language
and world knowledge, which raises the grounding problem and the problem of referen-
tiality: how do we understand what words refer to if they are not linked to the world
(Harnad 1990)? Based on the research evidence for sensory-motor activations during
semantic processing (Meteyard et al. 2012), it is clear that sensory-motor systems play
some role in semantic processing. Strong embodiment also appears to be unsatisfactory:
some degree of abstraction must take place in order to extract and combine features into
the correct conceptual conjunctions. Based on evidence from TMS and lesion studies, weak embodiment, where sensory-motor information plays an integral, representational role in semantic representation whilst maintaining some degree of abstraction, seems the most plausible choice.
Since word meanings appear to produce similar activation patterns to their real-world
referents, different types of words will necessarily have different patterns of activation.
Differences in semantic representations of objects and actions have clearly been demon-
strated with neuropsychology (e.g., Damasio and Tranel 1993) and imaging data (e.g.,
Martin et al. 1995) (for review see Vigliocco et al. 2011). Here, it has generally been
found that processing object-nouns involves activation of posterior sensory cortices
while processing action-verbs involves activation of fronto-parietal motor areas.
Fig. 9.2: A continuum of embodiment. Adapted from Meteyard, L., Cuadrado, S. R., Bahrami, B. and Vigliocco, G. 2012, Coming of age:
A review of embodiment and the neuroscience of semantics. Cortex, 48(7), 788–804.
Traditionally it has been argued that embodied theories have problems explaining how
abstract concepts are represented. Abstract words pose a special problem to theories of
embodied semantics because their content is not strongly perceptual or motoric, and as
such, it is often argued that their meaning can only be represented in abstract proposition-
al forms (e.g., Noppeney and Price 2004).
There are now a number of alternative (or complementary) hypotheses on embodi-
ment of abstract concepts. One hypothesis is that the meaning of abstract words is under-
stood through metaphorical mappings (Boroditsky 2000; Lakoff and Johnson 1980; see
Gibbs this volume). For example one could conceptualize the mind as a container (Dove
2009) because it holds information. Metaphor allows abstract representations to be based
on extensions of more concrete experience-based concepts grounded in perception and
action. Boroditsky et al. (2001) showed how the abstract concept of time could be em-
bodied using mental representations of the more concrete domain of space (see Evans
this volume for greater discussion on the representation of time). The authors speculated
that the link between the two concepts developed via correspondences between space
and time in experience: moving in space correlates with time. Language then builds upon
these simple correspondences.
Although metaphors highlight similarities between concepts, they do not define the
differences (Dove 2009): although the mind shares similarities with a container insofar as it contains information, it is much more than this, and the metaphor does not capture that additional content. Additionally, one can think of many aspects of abstract knowledge that
cannot be accounted for by metaphor (Meteyard et al. 2012), such as scientific technical
jargon (but see Glenberg 2011: 15). Although a role for metaphor could be acknowl-
edged, the question is whether metaphorical mappings could really be the foundation of
learning and representation of abstract concepts, or if they just provide structure for
existing concepts (Barsalou 1999).
The difference between concrete and abstract words may arise because of the number
and type of simulations for each word type, similar to differences in context (cf. the
context availability theory, Schwanenflugel and Shoben 1983). Abstract words’ mean-
ings would be based on a wider range of simulations than concrete words, and tend to
focus more on social, introspective and affective information than perceptual and motor
(Barsalou and Wiemer-Hasting 2005; Kousta et al. 2011; Connell and Lynott 2012).
Differences arise between the two word types because the type of information and situa-
tions relevant for abstract meaning is more difficult to access.
Kousta et al. (2011) and Vigliocco et al. (2009) described differences between abstract
and concrete concepts as arising from the ecological statistical preponderance of sensory-
motor features in concrete concepts compared to the statistical preponderance of linguis-
tic and especially affective associations for abstract concepts. They argue that affect may
be a critical factor in the learning and representation of abstract knowledge because
abstract words tend to have emotional associations, and because emotional development
precedes language development in children (Bloom 1998). Abstract words with greater
affective associations are acquired earlier with the rate of acquisition rapidly increasing
around age three (Bretherton and Beeghly 1982; Wellman et al. 1995), suggesting that
affect affects abstract word acquisition. When all other factors are controlled, emotional
associations of abstract words facilitate lexical decisions relative to concrete words, re-
versing the traditional concreteness advantage (Kousta et al. 2009). Unlike dual coding theory (e.g., Paivio 1986), where abstract words are disadvantaged due to their lack of image-based representations, on this account abstract words can show an advantage via their affective associations.
Despite empirical support for embodiment, many issues are still outstanding. First, re-
search needs to go beyond simply showing effects of interaction between linguistic and
sensory-motor stimuli and focus more on describing the nature of this relationship and
the specific mechanisms responsible for these interactions. Simply accumulating evi-
dence for some involvement of sensory-motor systems is unsatisfactory. Interactions between language and sensory-motor processes have been shown to produce both facilitation and interference effects; the processes underlying these differences need to
be explored. For example, Glenberg and Kaschak (2002) found that semantic judgments
were faster when direction of a physical response matched the direction described in the
language (facilitation) but Kaschak et al. (2006) found slower responses when the direc-
tion of motion of an auditory stimulus matched the direction described in language
(interference). Such opposing results might be explained by properties of the stimuli and
presentation, such as the match in modality of the presented linguistic and perceptual
stimuli, or the timing of presentation. To make progress on understanding the specific
mechanisms underlying these effects, we need to clarify the influence of these variables.
A commonly raised question about simulation is its necessity. Do we need simulation
in order to understand language or is it epiphenomenal (Mahon and Caramazza 2008),
with activation in sensorimotor areas simply the result of spreading activation between
dissociable systems? Looking carefully at the temporal dynamics of interactions between
language and sensorimotor systems could address questions of epiphenomenalism. If
language comprehension necessarily recruits sensorimotor systems, such effects should
be observed very early in processing (Pavan and Baggio 2013).
Depth of processing is a related issue. It is unclear whether simulation occurs under
all circumstances in all language tasks. Simulation may not be necessary for shallow
language tasks, where a good-enough representation could be inferred simply from lin-
guistic information alone, using statistical relations between words (Barsalou et al. 2008;
Louwerse, 2011). Embodied simulations could instead be reserved for deeper processing.
One aspect of language awaiting future research from this perspective is the learning
process. When and how are words linked with sensory systems? There have been some
attempts to describe this process, for example via Hebbian learning mechanisms under
the combined presence of object naming and the object’s sensory affordances (Pulvermüller 1999; Glenberg and Gallese 2012) or by exploiting iconic mappings between linguis-
tic form and meaning (Perniss et al. 2010).
It is clear that to move forward, embodied theories need to delve deeper into the
mechanisms that underlie the wealth of empirical data and formulate a clear, precise
and testable description of the specific nature of these processes and their temporal
properties.
Fig. 9.3: Approaches to semantic representation. (a) In a semantic network, words are represented
as nodes, and edges indicate semantic relationships. (b) In a semantic space, words are
represented as points, and proximity indicates semantic association. These are the first
two dimensions of a solution produced by latent semantic analysis (Landauer and Dumais,
1997). The black dot is the origin. (c) In the topic model, words are represented as
belonging to a set of probabilistic topics. The matrix shown on the left indicates the
probability of each word under each of three topics. The three columns on the right show
the words that appear in those topics, ordered from highest to lowest probability. Taken
from Griffiths, Steyvers and Tenenbaum (2007). Topics in semantic representation.
Psychological Review, 114 (2), 211–244.
In models like LSA or HAL it is presumed that words occurring in similar contexts have related meanings, but it is not specified how those meanings may be defined or described.
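To make this concrete, here is a minimal sketch of an LSA-style pipeline: build a word-by-context count matrix, reduce it with singular value decomposition, and compare words by the proximity of their latent vectors. The toy corpus, the number of retained dimensions and all values below are illustrative assumptions, not the model of Landauer and Dumais (1997), which was trained on large encyclopedia corpora.

```python
import numpy as np

# Toy corpus: each "document" serves as one context (an illustrative
# assumption; real LSA corpora contain tens of thousands of passages).
contexts = [
    "the dog barked at the cat",
    "the dog chased the cat",
    "the senate passed the bill",
    "the senate debated the bill",
]

# Build a word-by-context count matrix.
vocab = sorted({w for c in contexts for w in c.split()})
index = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(contexts)))
for j, c in enumerate(contexts):
    for w in c.split():
        M[index[w], j] += 1

# Reduce dimensionality with SVD, keeping k latent dimensions
# (k = 2 here purely for the toy example).
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
word_vectors = U[:, :k] * s[:k]

def similarity(w1, w2):
    """Cosine similarity between two words' latent vectors."""
    a, b = word_vectors[index[w1]], word_vectors[index[w2]]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(similarity("dog", "cat"))      # high: shared contexts
print(similarity("dog", "senate"))   # low: disjoint contexts
```

The dimensionality reduction step is what allows words that never directly co-occur, but that occur in similar contexts, to end up with similar vectors.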
While none of these models themselves are developmental in nature (i.e., modeling
language acquisition), as they all compute representations based on a stable input corpus,
they nonetheless can be explicitly applied to developmental processes simply by compar-
ing the representations given different types of language corpora (e.g., comparing statisti-
cal patterns in corpora taken from children versus adults). Furthermore the probabilistic
nature of topic models permits the possibility that distributions of topics, words and
contexts may all change over time. As a result distributional models can be applied
directly, and make predictions relevant to language development in a way that is not
obvious for embodied theories.
In network-based approaches, each word or concept is represented by a node that is defined by connections to other nodes. The full meaning of a concept arises from the whole network, beginning from the concept node, which alone is meaningless.
In holistic approaches, semantic similarity effects are explained in terms of spreading
activation from an activated node (such as a prime or distractor word) to other concepts
by connections between nodes (e.g., Quillian 1967). Response times in experimental
tasks would be driven by the time it takes a semantically similar node to reach an
activation threshold. As semantically related words are more closely connected in the network than semantically unrelated words, activation spreads more quickly from a related prime to the target word.
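As a rough sketch of this mechanism, the code below spreads activation from a prime over a small hand-built network; the network structure, association weights, decay rate and number of steps are all invented for illustration and are not parameters from Quillian (1967) or Collins and Loftus (1975).

```python
from collections import defaultdict

# A hand-built mini-network: edges carry association strengths in [0, 1].
# Both the structure and the weights are illustrative assumptions.
edges = {
    ("doctor", "nurse"): 0.8,
    ("doctor", "hospital"): 0.7,
    ("nurse", "hospital"): 0.6,
    ("doctor", "bread"): 0.05,
}

# Make the graph symmetric for undirected spreading.
graph = defaultdict(dict)
for (a, b), w in edges.items():
    graph[a][b] = w
    graph[b][a] = w

def spread(source, steps=2, decay=0.5):
    """Spread activation outward from a source node.

    Each step passes a decayed fraction of every node's activation to
    its neighbours, weighted by association strength.
    """
    activation = defaultdict(float)
    activation[source] = 1.0
    for _ in range(steps):
        incoming = defaultdict(float)
        for node, act in list(activation.items()):
            for neighbour, weight in graph[node].items():
                incoming[neighbour] += act * weight * decay
        for node, extra in incoming.items():
            activation[node] += extra
    return activation

# A related prime pre-activates the target, so the target needs less
# additional input to reach a recognition threshold -> faster responses.
act = spread("doctor")
print(act["nurse"], act["bread"])  # "nurse" far more active than "bread"
```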
In some holistic models, differences between object-nouns and action-verbs have been
modelled in terms of different relational links (e.g., Graesser et al. 1987; Huttenlocher
and Lui 1979). In WordNet (Miller and Fellbaum 1991) this is represented on a large scale, with four distinct networks representing nouns, verbs, adjectives and adverbs. The representation of abstract words in WordNet is no different from that of more concrete words of the same grammatical class, although abstract words tend to occur in shallower hierarchies.
Regarding the relationship between words and concepts, a strict one-to-one mapping
is proposed. Each lexical concept is equal to a single, abstract representation in the
conceptual system. This means that conceptual systems must contain representations of
all concepts that are lexicalized in all languages. Any lexical differences that appear
cross-linguistically must be due to conceptual differences. In order to defend the univer-
sality of conceptual structure, one must assume that not all concepts are lexicalized in
each language (see Vigliocco and Filipović 2004).
LSA (Landauer and Dumais 1997), the topic model (Griffiths et al. 2007) and HAL (Lund et al. 1995) have successfully simulated a number of semantic effects, including semantic similarity in semantic priming tasks. Using the word association norms of Nelson et al. (1998), the topic model predicted associations between words at a level well above chance and outperformed LSA on this as well as a range of other semantic tasks.
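Schematically, the topic model derives the association from a cue $w_1$ to a target $w_2$ by marginalising over the latent topics $z$ (notation simplified here from the generative framework of Griffiths et al. 2007):

$$P(w_2 \mid w_1) = \sum_{z} P(w_2 \mid z)\, P(z \mid w_1)$$

so a target is predicted as a strong associate to the extent that it is probable under the topics that the cue makes probable.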
LSA has successfully simulated a number of human cognitive behaviours. For exam-
ple, simulated scores on a standard vocabulary test have been shown to overlap with
human scores and simulations can mimic human word sorting behaviour (Landauer et
al. 1998). If these theories can successfully approximate human language comprehension
then they should be considered valid models of human language processing, reflecting
processes to some extent analogous to human language processing (Landauer and Du-
mais 1997).
Attempts have been made to directly test distributional models and their power to
predict neural activations. For example, Mitchell et al. (2008) found that voxel-level,
item-specific fMRI activations for concrete nouns could be predicted on the basis of
distributional statistics based on a large text corpus, and similar data have been obtained
using EEG data (Murphy et al. 2009). Such findings suggest that there is a close relation-
ship between statistical co-occurrences of words in texts and neural activity related to
understanding those words, further supporting the viability of distributional theories.
3.2.4. Looking toward the future: Where should distributional theories go?
Despite the power of distributional models in simulating human behaviour, some have
argued that the statistical patterns that exist in language co-occurrences are merely epi-
phenomenal and play no role in semantic representation (Glenberg and Robertson 2000).
A fundamental criticism that these approaches must address is that language-based models, unlike embodied theories, take no account of information from other sources of meaning, such as perception and introspection. In addition, the models cannot account
for existing behavioural and neuroscientific evidence linking language to the brain’s
sensory-motor systems. One can use the famous “Chinese room” example (Searle 1980)
to highlight the importance of this argument: how can meaning be inferred simply from
the relationships that exist between amodal symbols that are themselves void of mean-
ing?
Recently, distributional approaches have begun to address the
“grounding” problem (Harnad 1990) by including experiential information as another
type of distributional data, bringing together embodied and distributional ideas that have
typically been considered independently. In the next section we will discuss this further.
4. An integrated proposal: combining language-based and experiential information
Meaning in language could be both embodied and language-based, with the contribu-
tion of each system dependent on the language task at hand. Dove (2009) describes the
conceptual system as divided into both modal and amodal representations with each
responsible for different aspects of meaning. For example, it seems implausible that aspects of cognition such as logical reasoning or mathematics could proceed without any reliance on amodal symbols (Louwerse 2007).
The symbol interdependency theory (Louwerse 2007) describes meaning as composed
of symbols that are dependent on other symbols and symbols that are dependent on
embodied experiences. Here symbols are built upon embodied representations, but al-
though they are grounded, language comprehension can proceed simply via interrelations
amongst other symbols. Using linguistic representations allows for a more “quick and
dirty” response, whereas embodied simulations develop more slowly, accessing a wide
variety of detailed experiential information. Here, two predictions emerge. First, for shal-
low language tasks, involvement of linguistic representations should dominate over em-
bodied representations. Second, for tasks that involve a deeper level of processing, em-
bodied representations should dominate over linguistic ones. Barsalou et al. (2008) describe
similar ideas with lexical processing incorporating two processes: an early activation of
linguistic representations taking place in language areas of the brain and a later, situated
simulation involving modal systems.
Vigliocco et al. (2009) describe language as another vital source of information, along
with experiential information, from which semantic representations can be learnt. Statis-
tical distributions of words within texts provide important information about meaning
that can be integrated with sensory-motor experience. For example, a child could learn the meaning of the word dog through experience with dogs' perceptual features (having four legs, barking, etc.) as well as through language experience of hearing "dog", which tends to occur with words such as pet and animal. Combining both distributions of information allows linguistic information to "hook up" to the world, thus grounding it.
Modern computational work is also beginning to model semantic meaning by integrat-
ing experiential and linguistic distributional data. It has been shown that models that
combine both types of distributional data perform better in simulating semantic effects
than either distributions alone (Andrews et al. 2009). The underlying principles em-
ployed in distributional models can also be applied to other domains of experience, not
simply linguistic data. Johns and Jones (2012) proposed a model integrating both percep-
tual information (in the form of feature norms) and statistical information from language.
Here, a word’s full meaning is denoted by the concatenation of perceptual and linguistic
vectors. Using a model of global lexical similarity with a simple associative mechanism,
perceptual representations for words for which the model had no perceptual information
could be inferred based on lexical similarity and the limited perceptual information of
other words already existing in the model. Importantly, the inference can also go the
other way, with the likely linguistic structure of a word estimated based on its perceptual
information. Thus the model is able to infer the missing representation of a word based
on either perceptual or linguistic information.
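The sketch below conveys the flavour of this inference under strong simplifying assumptions: tiny hand-made vectors stand in for feature norms and co-occurrence statistics, and a similarity-weighted average stands in for the associative mechanism, so this is an illustration of the idea rather than Johns and Jones's (2012) implementation.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy linguistic vectors (e.g., co-occurrence counts with a few context
# words); the numbers are invented, purely for illustration.
linguistic = {
    "dog":   np.array([8.0, 1.0, 6.0]),
    "cat":   np.array([7.0, 2.0, 5.0]),
    "piano": np.array([0.0, 9.0, 1.0]),
    "wolf":  np.array([6.0, 1.0, 5.0]),  # no perceptual vector known
}

# Toy perceptual vectors (e.g., feature norms: furry, has-keys, barks)
# available only for a subset of the words.
perceptual = {
    "dog":   np.array([1.0, 0.0, 1.0]),
    "cat":   np.array([1.0, 0.0, 0.0]),
    "piano": np.array([0.0, 1.0, 0.0]),
}

def infer_perceptual(word):
    """Estimate a missing perceptual vector as a lexical-similarity-
    weighted average of the known perceptual vectors."""
    weights, vectors = [], []
    for other, p in perceptual.items():
        w = max(cosine(linguistic[word], linguistic[other]), 0.0)
        weights.append(w)
        vectors.append(p)
    weights = np.array(weights)
    return (weights[:, None] * np.array(vectors)).sum(0) / weights.sum()

# "wolf" inherits dog/cat-like perceptual structure from its linguistic
# neighbours; the full meaning is the concatenation of both vectors.
wolf_perc = infer_perceptual("wolf")
wolf_full = np.concatenate([linguistic["wolf"], wolf_perc])
print(wolf_perc)
```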
There are some potential shortcomings to current "integrated" models. Because feature norms are generated by speakers verbally and via introspection, using them as "embodied information" may omit perceptual, sensorimotor and affective aspects of experiential information, meaning that findings cannot necessarily be generalized to all word types. However, other methods for appropriately
modelling experiential information are being explored. Recent methods are beginning to
combine information from computer vision with text in distributional models; models
including visual information outperform distributional models based on text only, at least
when vision is relevant to words’ meanings (Bruni et al. 2012a, 2012b). Future work
will need to make use of more sophisticated types of perceptual information, as well as
incorporating other aspects of bodily experience such as action and emotion.
5. Conclusion
The state of the art in cognitive science proposes that the learning and representation of
word meanings involves the statistical combination of experiential information: sensori-
motor and affective information gleaned from experience in the world (extralinguistic),
and distributional linguistic information: statistical patterns occurring within a language
itself (intralinguistic). Research suggests that sensory-motor and affective systems play a central role in grounding word meaning in our worldly experience. This grounding is thought crucial in allowing the language system to learn word meanings from existing embodied word meanings. The associations between linguistic units allow learners to
more quickly infer word meaning and locate the corresponding experiential information
in the absence of any direct experience of the referent. By learning about word meaning
from both distributions in parallel, ultimately a richer form of semantic information is
gained.
6. References
Andrews, Mark, Gabriella Vigliocco and David P. Vinson
2009 Integrating experiential and distributional data to learn semantic representations. Psycho-
logical Review 116(3): 463−498.
Bak, Thomas H., Dominic G. O’Donovan, John J. Xuereb, Simon Boniface and John R. Hodges
2001 Selective impairment of verb processing associated with pathological changes in Brod-
mann areas 44 and 45 in the motor neuron disease-dementia-aphasia syndrome. Brain
124: 103−120.
Barsalou, Lawrence W.
1999 Perceptual symbol systems. Behavioral and Brain Sciences 22: 577−660.
Barsalou, Lawrence W., Ava Santos, W. Kyle Simmons and Christine D. Wilson
2008 Language and simulation in conceptual processing. In: M. de Vega, A. M. Glenberg, and
A. C. Graesser (eds.), Symbols, Embodiment and Meaning, 245−283. Oxford: Oxford
University Press.
Barsalou, Lawrence W. and Katja Wiemer-Hastings
2005 Situating abstract concepts. In: D. Pecher and R. A. Zwaan (eds.), Grounding Cognition:
The Role of Perception and Action in Memory, Language, and Thought, 129−163. New
York: Cambridge University Press.
Bergen, Benjamin
this volume 1. Embodiment. Berlin/Boston: De Gruyter Mouton.
Bird, Helen, David Howard and Sue Franklin
2003 Verbs and nouns: The importance of being imageable. Journal of Neurolinguistics
16(2):113−149.
Bloom, Lois
1998 Language acquisition in its developmental context. In: D. Kuhn and R. S. Siegler (eds.),
Handbook of Child Psychology 2, 309−370. New York: Wiley.
Boroditsky, Lera
2000 Metaphoric structuring: Understanding time through spatial metaphors. Cognition 75(1):
1−28.
Boroditsky, Lera, Michael Ramscar and Michael Frank
2001 The roles of body and mind in abstract thought. Proceedings of the 23rd Annual Confer-
ence of the Cognitive Science Society. University of Edinburgh.
Boulenger, Véronique, Laura Mechtouff, Stéphane Thobis, Emmaneul Broussolle, Marc Jeannerod
and Tatjana A. Nazir
2008 Word processing in Parkinson’s disease is impaired for action verbs but not for concrete
nouns. Neuropsychologia 46: 743−756.
Bretherton, Inge and Marjorie Beeghly
1982 Talking about internal states: The acquisition of an explicit theory of mind. Developmen-
tal Psychology 18: 906−921.
Bruni, Elia, Marco Baroni, Jasper Uijlings and Nicu Sebe
2012a Distributional semantics with eyes: Using image analysis to improve computational rep-
resentations of word meaning. Proceedings of the 20th ACM International Conference on
Multimedia, 1219−1228.
Bruni, Elia, Gemma Boleda, Marco Baroni and Nam-Khanh Tran
2012b Distributional semantics in Technicolor. Proceedings of the 50th Annual Meeting of the
Association for Computational Linguistics, 136−145.
Buchanan, Erin M., Jessica L. Holmes, Marilee L. Teasley and Keith A. Hutchison
2012 English semantic word-pair norms and a searchable Web portal for experimental stimu-
lus creation. Behavior Research Methods 44(4): 746−757.
Collins, Allan M. and Elizabeth F. Loftus
1975 A spreading-activation theory of semantic processing. Psychological Review 82: 407−
428.
Collins, Allan M. and M. Ross Quillian
1969 Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior
12: 240−247.
Connell, Louise and Dermot Lynott
2012 Strength of perceptual experience predicts word processing performance better than con-
creteness or imageability. Cognition 125(3): 452−465
Damasio, Antonio R.
1989 Time-locked multiregional retroactivation: A systems-level proposal for the neural sub-
strates of recall and recognition. Cognition 33: 25−62.
Damasio, Antonio R. and Daniel Tranel
1993 Nouns and verbs are retrieved with differently distributed neural systems. Proceedings
of the National Academy of Sciences of the United States of America 90: 4957−4960.
Divjak, Dagmar and Catherine Caldwell-Harris
this volume 3. Frequency and entrenchment. Berlin/Boston: De Gruyter Mouton.
Dove, Guy
2009 Beyond perceptual symbols: A call for representational pluralism. Cognition 110: 412−
431.
Evans, Vyvyan
2003 The Structure of Time: Language, Meaning and Temporal Cognition. Amsterdam: Benjamins.
Farah, Martha J. and James L. McClelland
1991 A computational model of semantic memory impairment: Modality-specificity and emer-
gent category specificity. Journal of Experimental Psychology: General 120: 339−357.
Frege, Gottlob
[1892] 1952 On sense and reference. In: P. T. Geach and M. Black (eds. and Trans.), Philo-
sophical Writings of Gottlob Frege. Oxford: Basil Blackwell.
Geeraerts, Dirk
this volume 13. Lexical semantics. Berlin/Boston: De Gruyter Mouton.
Glenberg, Arthur M.
2011 How reading comprehension is embodied and why that matters. International Electronic
Journal of Elementary Education 4(1): 5−18.
Glenberg, Arthur M. and Vittorio Gallese
2012 Action-based language: A theory of language acquisition, comprehension and production.
Cortex 48(7): 905−922.
Glenberg, Arthur M. and Michael P. Kaschak
2002 Grounding language in action. Psychonomic Bulletin and Review 9: 558−565.
Glenberg, Arthur M. and David A. Robertson
2000 Symbol grounding and meaning: A comparison of high-dimensional and embodied theo-
ries of meaning. Journal of Memory and Language 43: 379−401.
Gibbs, Raymond W. Jr.
this volume 8. Metaphor. Berlin/Boston: De Gruyter Mouton.
Graesser, Arthur C., Patricia L. Hopkinson and Cheryl Schmid
1987 Differences in interconcept organization between nouns and verbs. Journal of Memory
and Language 26: 242−253.
Gries, Stefan Th.
this volume 22. Polysemy. Berlin/Boston: De Gruyter Mouton.
Griffiths, Thomas L., Mark Steyvers and Joshua B. Tenenbaum
2007 Topics in semantic representation. Psychological Review 114(2): 211−244.
Hall, Richard
1984 Sniglets (snig’lit): Any Word That Doesn’t Appear in the Dictionary, But Should. Collier
Books.
Harnad, Stevan
1990 The symbol grounding problem. Physica D 42: 335−346.
Hauk, Olaf, Ingrid Johnsrude and Friedemann Pulvermüller
2004 Somatotopic representation of action words in human motor and premotor cortex. Neu-
ron 41(2): 301−307.
Huttenlocher, Janellen and Felicia Lui
1979 The semantic organization of some simple nouns and verbs. Journal of Verbal Learning
and Verbal Behavior 18: 141−179.
Johns, Brendan T. and Michael N. Jones
2012 Perceptual inference through global lexical similarity. Topics in Cognitive Science
4:103−120.
Kaschak, Michael P., Rolf A. Zwaan, Mark Aveyard and Richard H. Yaxley
2006 Perception of auditory motion affects language processing. Cognitive Science 30: 733−
744.
Kousta, Stavroula-Thaleia, Gabriella Vigliocco, David P. Vinson, Mark Andrews and Elena Del Campo
2011 The representation of abstract words: Why emotion matters. Journal of Experimental Psychology: General 140: 14−34.
Kousta, Stavroula-Thaleia, David P. Vinson, and Gabriella Vigliocco
2009 Emotion words, regardless of polarity, have a processing advantage over neutral words.
Cognition 112(3): 473−481.
Lakoff, George and Mark Johnson
1980 Metaphors We Live By. Chicago: University of Chicago Press.