Representing Meaning
Laura Speed, David Vinson and Gabriella Vigliocco
Chapter · July 2015



9. Representing Meaning
1. Introduction
2. Key issues in semantic representation
3. Theoretical perspectives
4. An integrated proposal: combining language-based and experiential information
5. Conclusion
6. References

I. The Cognitive foundations of language

1. Introduction
Understanding the meaning of words is crucial to our ability to communicate. To do so
we must reliably map the arbitrary form of a spoken, written or signed word to the
corresponding concept, whether it is present in the environment, tangible or merely imag-
ined (Meteyard et al. 2012: 2). In this chapter we review two current approaches to
understanding word meaning from a psychological perspective: embodied and distribu-
tional theories. Embodied theories propose that understanding words’ meanings requires
mental simulation of entities being referred to (e.g., Barsalou 1999; see also Bergen this
volume) using the same modality-specific systems involved in perceiving and acting
upon such entities in the world. Distributional theories on the other hand typically de-
scribe meaning in terms of language use: something arising from statistical patterns that
exist amongst words in a language. Instead of focusing on bodily experience, distribu-
tional theories focus upon linguistic data, using statistical techniques to describe words’
meanings in terms of distributions across different linguistic contexts (e.g., Landauer
and Dumais 1997; Griffiths et al. 2007). These two general approaches are traditionally set in opposition, although this need not be the case (Andrews et al. 2009); in fact, integrating them may yield better semantic models (Vigliocco et al. 2009).
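The distributional idea can be sketched with a toy co-occurrence model: each word is represented by counts of the words appearing near it, and similarity is the cosine between count vectors. The following is a minimal sketch only; the corpus, window size and words are invented for illustration, and real models operate over very large corpora:

```python
from collections import Counter
from math import sqrt

# Toy corpus; a real distributional model would use millions of words.
corpus = [
    "the dog chased the cat",
    "the dog bit the cat",
    "the cat chased the mouse",
    "students read the book",
    "students read the paper",
]

def cooccurrence_vectors(sentences, window=2):
    """Count, for each word, the words occurring within +/- window positions."""
    vectors = {}
    for sentence in sentences:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            ctx = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    ctx[tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

vecs = cooccurrence_vectors(corpus)
# "dog" and "cat" occur in similar contexts, so they end up closer
# to each other than "dog" and "book" do.
print(cosine(vecs["dog"], vecs["cat"]) > cosine(vecs["dog"], vecs["book"]))  # True
```

Models such as Landauer and Dumais's LSA additionally apply dimensionality reduction to such count matrices, so that words can be judged similar even without directly sharing contexts.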
We will highlight some key issues in lexical representation and processing and de-
scribe historical predecessors for embodied theories (i.e., featural approaches) and distri-
butional theories (i.e., holistic approaches). We conclude by proposing an integrated
model of meaning where embodied and linguistic information are both considered vital
to the representation of words’ meanings.

2. Key issues in semantic representation


A theory of semantic representation must satisfactorily address two key issues: represen-
tation of words from different content domains and the relationship between semantics
(word meaning) and conceptual knowledge.

2.1. Are words from different domains represented in the same way?
The vast majority of research investigating semantic representation has focused on con-
crete nouns. The past decade has seen increasing research into the representation of
action verbs and an emerging interest in how abstract meaning is represented. A critical question is whether the same overarching principles can be used across
these domains, or whether organisational principles must differ.
A fundamental difference between objects and actions is that objects can be thought
of in isolation, as discrete entities, but actions are more complex, describing relations
among multiple participants (Vigliocco et al. 2011). Connected to this are temporal dif-
ferences: actions tend to be dynamic events with a particular duration while objects are
stable with long-term states.
Because of the stable nature of objects, nouns’ meanings tend to be relatively fixed.
Verbs’ meanings are less constrained and often more polysemous. These differences
could underscore different representational principles for object-nouns and action-verbs,
but do not preclude a semantic system in which objects and actions are represented in
the same manner and differences in organisation arise from differences in representation-
al content. Such an example is described by Vigliocco et al.'s (2004) FUSS model, in
which representations for action and object words are modelled in the same lexico-
semantic space, using the same principles and tools. Differences emerge from differences
in the featural properties of the two domains rather than different principles of organisa-
tion.
When comparing concrete and abstract words, there is a stronger case for assuming
different content and different organisational principles. It is well established that pro-
cessing abstract words takes longer than processing concrete words (the “concreteness
effect”) for which Paivio’s dual-coding theory provides a long-standing account (e.g.,
Paivio 1986). Under this view two separate systems contribute to word meaning: a word-
based system and an image-based system. Whereas concrete words use both systems
(with greater reliance on the latter), abstract words rely solely on word-based informa-
tion. The concreteness effect would occur because concrete words use two systems in-
stead of one, thus having richer and qualitatively different semantic representations than
abstract words.
An alternative view, the context availability theory (Schwanenflugel and Shoben
1983), does not require multiple representational systems to account for the concreteness
effect. Under this view, advantages for concrete words come from differences in associa-
tions between words and previous knowledge (i.e., differences in the number of links,
rather than in content/organisation), with abstract concepts being associated with much
less context. Here the concreteness effect results from the availability of sufficient con-
text for processing concrete concepts in most language situations, but deficient context
for processing abstract words (Schwanenflugel and Shoben 1983).
More recent proposals for the differences between concrete and abstract concepts
and words include viewing abstract knowledge as arising out of metaphorical extension
(Boroditsky 2000; Lakoff and Johnson 1980; Bergen this volume; Gibbs this volume),
or differences in featural properties rather than different principles of organisation for
abstract and concrete meaning: sensorimotor information underlying concrete meanings
and affective and linguistic information underlying abstract meanings (Kousta et al.
2011).
To summarise, theories of semantic representation make different assumptions about
semantic representations for different domains of knowledge, varying from a single,
unitary semantic system to a much more fractionated system, where different principles
of organisation are specified for different word types. However, there exists no strong
evidence for assuming different principles, and following the argument of parsimony,
we argue for a unitary system based on the same principles across domains. Instead of
different organisational principles, differences across domains come about due to differ-
ences in content, namely differences in the extent to which a given type of content is
most important for a given domain: sensory-motor information for the concrete domain
and emotion and linguistic information for the abstract domain (Vigliocco et al. 2009).

2.2. How is conceptual knowledge linked to word meaning?


The fundamental goal of language is to talk about “stuff” such as objects, events, feel-
ings, situations and imaginary worlds. Thus, there must be a strong mapping between
our conceptual knowledge (the knowledge we use to categorise and understand the
world) and the language we use. Since we begin life exploring and learning about our
world, with language developing later, conceptual knowledge ultimately must develop
before language. One important issue then is how words relate to conceptual knowledge.
Should we think of word meanings and concepts interchangeably? This relationship has
many important implications, for example, the extent of translation equivalency across
languages.
One argument for treating words and concepts interchangeably is that many robust phe-
nomena have been found to affect them both. If the same factors affect both and they behave
similarly, then they must be closely linked, if not interchangeable. For example, feature
type, feature correlations and distinguishing features have been shown to explain category-
specific deficits in categorization of concepts (e.g., McRae and Cree 2002) and semantic
priming effects for words (McRae and Boisvert 1998). Because characteristics of conceptu-
al features seem to have comparable effects it would be parsimonious to consider conceptu-
al representations the same as word meaning (consistent with Langacker 1982).
There are reasons, however, to suggest that there is not a one-to-one mapping between
the two. First, we possess far more concepts than words. There are often actions or
situations that we know well and understand that are not lexicalized such as “the actions
of two people manoeuvring for one armrest in a movie theatre or airplane seat” (Hall
1984 discussed in Murphy 2002). Further, one word can be used to refer to multiple
meanings (e.g., polysemy) and so refers to a set of concepts instead of a single concept
(see Gries this volume). This matter is further complicated when we look at cross-
linguistic differences in links between conceptual knowledge and linguistic representa-
tions (see Vigliocco and Filipović 2004).
There are many examples of cross-linguistic differences in semantic representations
that do not have any obvious explanations. For instance, whereas English and Italian each have distinct words denoting the foot and the leg, Japanese speakers use the same word ashi to refer to both. One could hardly argue that conceptually, Japanese
speakers do not know the difference between one’s foot and one’s leg. If linguistic
categories are based on one-to-one mappings with conceptual structure, then cross-lin-
guistic differences have clear implications for the assumption of universality of conceptu-
al structure.
With the above issues in mind, below we present the two main perspectives on seman-
tic representation, guided by the ideas that the same organising principles apply across
word types and that meaning is distinct from but strongly linked to conceptual know-
ledge (e.g., Vigliocco and Filipović, 2004).

3. Theoretical perspectives
The main theoretical approaches to word meaning can be clustered into those that consid-
er our sensorimotor experience as the building blocks of semantic representation and
those that instead consider statistical patterns in language as the building blocks. This
great divide corresponds to disciplinary boundaries between cognitive psychology and
neuroscience on one side and computational linguistics and computer science on the
other side. Within linguistics, both perspectives are represented, reflecting the distinction between sense and reference that goes back to Frege ([1892] 1952).
3.1. Embodiment

Embodied approaches posit that understanding words' meanings involves engagement of the systems used in perception, action and introspection (e.g., Barsalou 1999; Svesson
1999; Evans 2003; Lakoff and Johnson 1999; Bergen this volume). This approach focus-
es on content of semantic representations rather than relationships among them in seman-
tic memory. Embodied theorists argue against “amodal” models of semantics (Figure
9.1a) because they are missing the vital link between meaning in language and experi-
ence in the real world. In other words, it is unclear how the meaning of a word is
understood if language is simply made up of arbitrary symbols not linked to referents
or experiences in the world (Harnad 1990). Here, to understand a word one simulates
its meaning in the brain’s sensorimotor systems, similarly to actually experiencing that
concept. Instead of transducing information from experience into abstract symbols, the
experience itself is, in a way, recreated (Barsalou 1999) (see Figure 9.1b). The distinction
between conception and perception is blurred (Lakoff and Johnson 1999).

3.1.1. Featural theories as precursors to embodiment

Embodiment places emphasis on sensorimotor features as building blocks of meaning. This emphasis is shared with classic featural theories where a word's meaning is decom-
posable into a set of defining features (e.g., Collins and Quillian 1969; Rosch and Mervis
1975). Sets of conceptual features are bound together to form a lexical representation of
the word’s meaning. For example, the meaning of chair could be defined by features
including has legs, made of wood and is sat on.
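The decompositional idea can be illustrated with a toy feature-set model, in which a word's meaning is a set of features and similarity is featural overlap. The feature lists below are invented for illustration; empirical feature norms are collected from human participants:

```python
# Hypothetical feature sets (illustrative only); real feature norms are
# elicited from participants in feature-generation tasks.
features = {
    "chair": {"has_legs", "made_of_wood", "is_sat_on", "is_furniture"},
    "stool": {"has_legs", "made_of_wood", "is_sat_on", "is_furniture", "has_no_back"},
    "dog":   {"has_legs", "has_fur", "barks", "is_animal"},
}

def feature_overlap(w1, w2):
    """Jaccard overlap between two words' feature sets."""
    f1, f2 = features[w1], features[w2]
    return len(f1 & f2) / len(f1 | f2)

# Featural overlap predicts semantic similarity effects such as priming:
# chair and stool share far more features than chair and dog.
print(feature_overlap("chair", "stool") > feature_overlap("chair", "dog"))  # True
```

Jaccard overlap is just one possible measure; norm-based models also weight features, for example by how many participants produce them.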
Featural properties of different word categories have been modeled to explain catego-
ry-specific deficits in different forms of brain damage and to shed light on the organisa-
tion of the semantic system (e.g., Farah and McClelland 1991). By looking at the propor-
tion of perceptual (e.g., has fur) and functional (e.g., cuts food) features for artifacts and
natural kinds, Farah and McClelland (1991) described the topographic organisation of
semantic memory in terms of modality rather than category. In their simulations, damage
to perceptual features only caused selective deficits for processing of natural kinds,
whereas conversely, damage to functional features only caused selective deficits for pro-
cessing of artifacts. What was once seen as a category-specific deficit therefore emerged
as a result of damage to specific feature types, suggesting that semantic memory is
organised in terms of sensorimotor features and not categories.
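The logic of Farah and McClelland's result can be sketched in a toy calculation (not their actual connectionist model; the concepts and counts below are illustrative, loosely following their perceptual-to-functional feature ratios). Because categories differ in their ratio of perceptual to functional features, "damaging" one feature system disproportionately impairs one category:

```python
# Toy feature counts: natural kinds are dominated by perceptual features,
# artifacts by functional ones. Concepts and exact counts are illustrative.
concepts = {
    "zebra":  {"perceptual": 8, "functional": 1},   # natural kind
    "hammer": {"perceptual": 3, "functional": 6},   # artifact
}

def after_damage(concept, system, severity=0.75):
    """Fraction of a concept's features surviving damage to one
    feature system ('perceptual' or 'functional')."""
    counts = concepts[concept]
    total = sum(counts.values())
    lost = counts[system] * severity
    return (total - lost) / total

# Damaging perceptual features impairs the natural kind far more than
# the artifact, mimicking a category-specific deficit.
print(after_damage("zebra", "perceptual"))   # ~0.33
print(after_damage("hammer", "perceptual"))  # 0.75
```

Damage to the functional system reverses the pattern, which is the sense in which an apparently category-specific deficit can emerge from modality-specific organisation.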
In featural theories, semantic similarity between words can be described in terms of
featural correlations and featural overlap. Both measures have been validated as indica-
tions of semantic similarity in behavioural tasks such as semantic priming (e.g., McRae
and Boisvert 1998). Featural theories have been applied to explain differences between
words referring to objects (nouns) and words referring to events (primarily verbs refer-
ring to actions) in terms of feature types and associations between features. Nouns’
meanings appear to be more differentiated, with dense associations between features and
properties (Tyler et al. 2001) across many different sensory domains (Damasio and Tra-
nel 1993). They also have more specific features referring to narrow semantic fields,
whereas verbs typically consist of features applying broadly across semantic fields and
Fig. 9.1: Amodal vs. perceptual symbols. Taken from Barsalou, L. W., Simmons, W. K., Barbey, A., and Wilson, C. D. (2003). Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7, 84–91. (a) In amodal symbol systems, neural representations from vision are transduced into an amodal representation such as a frame, semantic network or feature list; these amodal representations are used during word understanding. (b) In perceptual symbol systems, neural representations from vision are partially captured by conjunctive neurons, which are later activated during word comprehension to re-enact the earlier state.

with fewer sensory associations (Vinson and Vigliocco 2002). In this sense, verbs could
be considered to be more abstract than nouns (Bird et al. 2003). These differences have
been invoked to account for patients with selective deficits in retrieving and producing
nouns and for those with greater problems with verbs (see Vigliocco et al. 2011). It is
questionable whether these theories can be extended to account for differences between
concrete and abstract words. However, a recently published collection of feature norms
found that participants can generate features for abstract words with general agreement
across subjects that could not be explained simply by associations (Buchanan et al.
2012).
Featural theories usually focus on concepts, not words (although concepts and words
are often implicitly or explicitly assumed to be the same). There are theories, however,
that assume a separate semantic level where features are bound into a lexico-semantic
representation (Vigliocco et al. 2004), and others that hypothesize “convergence zones”
in the brain where information from multiple modalities is integrated (Damasio 1989,
Simmons and Barsalou 2003; see Vigliocco et al. 2012).
Embodiment theories build upon these earlier accounts, as research that supports
featural representations is necessarily compatible with embodied views. For example,
semantic priming based on overlapping features (McRae and Boisvert 1998) could be
explained by overlap in activation of the same sensorimotor area (e.g., Pecher et al.
2003).
3.1.2. Research supporting embodied theories

A large amount of behavioural evidence demonstrates the use of sensorimotor systems in language processing, typically with interactions between the processing of words'
semantic content and sensory information (see Bergen this volume). For example, Mete-
yard et al. (2007) showed that visual discrimination of moving dots was hindered when
processing direction verbs of the same direction (e.g., dive, rise). Conversely, lexical
decisions to direction verbs were hindered when participants concurrently perceived mo-
tion of a matching direction at near-threshold levels (Meteyard et al. 2008). If processing
semantic content involves shared sensory-motor systems, then combining word process-
ing and sensory-motor processing should affect performance.
Numerous imaging studies provide support for embodied language processing, show-
ing that areas of the brain involved in perception and action are engaged when processing
words with similar content. For example, listening to verbs related to leg, face or arm
action such as kick, lick and pick activates the motor cortex somatotopically (Hauk et al.
2004). This activation reflects action specificity: for example, a region within the bilateral inferior parietal lobule showed differential patterns of activation to words of different levels of specificity, e.g., to clean versus to wipe (van Dam et al. 2010). Moreover, this activation is differentially lateralised depending upon participants' handedness, indicating that the sensorimotor activation underlying word meaning is body-specific (Willems et al. 2010).
Strong evidence for the role of sensorimotor systems in word comprehension comes
from studies in which deficits in motor or sensory processing result in a selective deficit
in word processing of the same category. If sensorimotor systems play a critical role in
semantic representation, damage to these areas should disrupt semantic processing of
those word types. Research of this nature tends to look at patients with impairments in
planning and executing actions, e.g., patients with motor neuron disease (e.g., Bak et
al. 2001) or Parkinson's disease (e.g., Boulenger et al. 2008). Bak et al. (2001) found that comprehension and production of verbs were significantly more impaired than those of nouns in patients with motor neuron disease, but not in healthy controls or patients with Alzheimer's disease, who show both semantic and syntactic language impairments. This selective
deficit in patients with motor neuron disease suggests that the processes underlying verb
representation are strongly related to those of the motor systems (see Vigliocco et al. 2011 for a review). In addition, transcranial magnetic stimulation (TMS) over specific brain
regions has been shown to influence processing of related word types, such as the motor
strip and action verbs (e.g., Pulvermuller et al. 2005).
Critics have argued that embodied results may simply be epiphenomenal: the result
of spreading activation from amodal representations to perceptual areas via indirect,
associative routes due to the correlation between the two (e.g., Mahon and Caramazza
2008). Mahon and Caramazza (2008) argue that existing evidence can be explained by
unembodied theories in which semantic information is independent of sensory-motor
information. The observed interactions could come about indirectly; for example, seman-
tic information may engage working memory systems which in turn recruit sensory-
motor systems (Meteyard et al. 2012: 3). This argument however seems to fall short of
explaining the observed lesion and TMS data. That is, if semantic processing is affected
by disruption of the corresponding sensory-motor areas, then the affected areas must be
a necessary part of semantic representation, and not epiphenomenal. Mahon and Caramazza's view is not completely disembodied, but rather falls along a continuum, as we
will describe in the next section.

3.1.3. Different versions of embodiment

Theories of embodiment vary in terms of how strongly they define the role of the sensori-
motor systems in semantic representation. Theories can be considered along a continuum
from strongly embodied (full simulation), through weak embodiment and secondary em-
bodiment, and then moving beyond embodiment to fully symbolic, disembodied theories
(Meteyard et al. 2012; see Figure 9.2). Distributional approaches could be placed on
the extreme, “disembodied” end of the continuum, assigning no role for sensory-motor
information. Theories supporting secondary embodiment still see semantics as amodal
and abstract but propose that semantic representation and sensory-motor information are
directly associated, for example, amodal representations derived from sensory-motor in-
put (Patterson et al. 2007). For weak embodiment, semantic representations are partly
instantiated by sensory-motor information which does have a representational role, but
some degree of abstraction still takes place. Areas adjacent to primary sensory-motor
areas are involved in semantic representation and are reciprocally linked to primary
areas. From a strong embodiment perspective, semantic processing necessarily activates
sensory-motor information and is completely dependent upon it. Here, semantic process-
ing takes place within primary sensory and motor areas and precisely the same systems
are used for semantic processing and sensory-motor processing.
A fully symbolic theory is problematic because there is no link between language
and world knowledge, which raises the grounding problem and the problem of referen-
tiality: how do we understand what words refer to if they are not linked to the world
(Harnad 1990)? Based on the research evidence for sensory-motor activations during
semantic processing (Meteyard et al. 2012), it is clear that sensory-motor systems play
some role in semantic processing. Strong embodiment also appears to be unsatisfactory:
some degree of abstraction must take place in order to extract and combine features into
the correct conceptual conjunctions. Based on evidence from TMS and lesion studies,
weak embodiment, where sensory-motor information plays an integral, representational
role in semantic representation whilst maintaining some degree of abstraction, seems the
most plausible choice.

3.1.4. Key issues and embodied theories

Since word meanings appear to produce similar activation patterns to their real-world
referents, different types of words will necessarily have different patterns of activation.
Differences in semantic representations of objects and actions have clearly been demon-
strated with neuropsychology (e.g., Damasio and Tranel 1993) and imaging data (e.g.,
Martin et al. 1995) (for review see Vigliocco et al. 2011). Here, it has generally been
found that processing object-nouns involves activation of posterior sensory cortices
while processing action-verbs involves activation of fronto-parietal motor areas.
Fig. 9.2: A continuum of embodiment. Adapted from Meteyard, L., Cuadrado, S. R., Bahrami, B., and Vigliocco, G. (2012). Coming of age: A review of embodiment and the neuroscience of semantics. Cortex, 48(7), 788–804.

Traditionally it has been argued that embodied theories have problems explaining how
abstract concepts are represented. Abstract words pose a special problem to theories of
embodied semantics because their content is not strongly perceptual or motoric, and as
such, it is often argued that their meaning can only be represented in abstract proposition-
al forms (e.g., Noppeney and Price 2004).
There are now a number of alternative (or complementary) hypotheses on embodi-
ment of abstract concepts. One hypothesis is that the meaning of abstract words is under-
stood through metaphorical mappings (Boroditsky 2000; Lakoff and Johnson 1980; see
Gibbs this volume). For example one could conceptualize the mind as a container (Dove
2009) because it holds information. Metaphor allows abstract representations to be based
on extensions of more concrete experience-based concepts grounded in perception and
action. Boroditsky et al. (2001) showed how the abstract concept of time could be em-
bodied using mental representations of the more concrete domain of space (see Evans
this volume for greater discussion on the representation of time). The authors speculated
that the link between the two concepts developed via correspondences between space
and time in experience: moving through space correlates with the passage of time. Language then builds upon
these simple correspondences.
Although metaphors highlight similarities between concepts, they do not define the
differences (Dove, 2009): although the mind shares similarities with a container insofar
as it contains information, it is much more than this, and that additional content is not captured by the metaphor. Additionally, one can think of many aspects of abstract knowledge that
cannot be accounted for by metaphor (Meteyard et al. 2012), such as scientific technical
jargon (but see Glenberg 2011: 15). Although a role for metaphor could be acknowl-
edged, the question is whether metaphorical mappings could really be the foundation of
learning and representation of abstract concepts, or if they just provide structure for
existing concepts (Barsalou 1999).
The difference between concrete and abstract words may arise because of the number
and type of simulations for each word type, similar to differences in context (cf. the
context availability theory, Schwanenflugel and Shoben 1983). Abstract words' mean-
ings would be based on a wider range of simulations than concrete words, and tend to
focus more on social, introspective and affective information than perceptual and motor
(Barsalou and Wiemer-Hastings 2005; Kousta et al. 2011; Connell and Lynott 2012).
Differences arise between the two word types because the types of information and situations relevant for abstract meaning are more difficult to access.
Kousta et al. (2011) and Vigliocco et al. (2009) described differences between abstract
and concrete concepts as arising from the ecological statistical preponderance of sensory-
motor features in concrete concepts compared to the statistical preponderance of linguis-
tic and especially affective associations for abstract concepts. They argue that affect may
be a critical factor in the learning and representation of abstract knowledge because
abstract words tend to have emotional associations, and because emotional development
precedes language development in children (Bloom 1998). Abstract words with greater
affective associations are acquired earlier, with the rate of acquisition increasing rapidly around age three (Bretherton and Beeghly 1982; Wellman et al. 1995), suggesting that affect plays a role in abstract word acquisition. When all other factors are controlled, emotional
associations of abstract words facilitate lexical decisions relative to concrete words, re-
versing the traditional concreteness advantage (Kousta et al. 2009). Unlike dual coding
theory (e.g., Paivio 1986) where abstract words are disadvantaged due to their lack of
imageability, emotional processing confers further benefits to abstract words (Vigliocco et al. 2013).
At present, therefore, a growing number of studies are attempting to describe embodi-
ment of abstract concepts. Accounts based on metaphor and the range and nature of
simulations successfully explain findings in a number of domains, yet there remain many
more abstract and schematic elements of language which are not easily accounted for.
For example, it is difficult to imagine how simulation can underlie the representation of
abstract and schematic closed-class words such as prepositions and determiners (Mete-
yard et al. 2012), so a completely embodied semantic system seems unlikely.
Do embodied theories make a distinction between word meaning and conceptual
knowledge? In terms of the continuum of embodied theories described above, as one
moves further from abstract/symbolic theories to strong versions of embodiment, the
content of semantic representation includes gradually more sensory-motor information
(Meteyard et al. 2012), blurring the distinction between semantics and conceptual infor-
mation.

3.1.5. Looking toward the future: Where should embodiment go?

Despite empirical support for embodiment, many issues are still outstanding. First, re-
search needs to go beyond simply showing effects of interaction between linguistic and
sensory-motor stimuli and focus more on describing the nature of this relationship and
the specific mechanisms responsible for these interactions. Simply accumulating evi-
dence for some involvement of sensory-motor systems is unsatisfactory. Interactions between language and sensory-motor processes have been shown to produce both facilitation and interference; the processes underlying these differences need to
be explored. For example, Glenberg and Kaschak (2002) found that semantic judgments
were faster when direction of a physical response matched the direction described in the
language (facilitation) but Kaschak et al. (2006) found slower responses when the direc-
tion of motion of an auditory stimulus matched the direction described in language
(interference). Such opposing results might be explained by properties of the stimuli and
presentation, such as the match in modality of the presented linguistic and perceptual
stimuli, or the timing of presentation. To make progress on understanding the specific
mechanisms underlying these effects, we need to clarify the influence of these variables.
A commonly raised question about simulation is its necessity. Do we need simulation
in order to understand language or is it epiphenomenal (Mahon and Caramazza 2008),
with activation in sensorimotor areas simply the result of spreading activation between
dissociable systems? Looking carefully at the temporal dynamics of interactions between
language and sensorimotor systems could address questions of epiphenomenalism. If
language comprehension necessarily recruits sensorimotor systems, such effects should
be observed very early in processing (Pavan and Baggio 2013).
Depth of processing is a related issue. It is unclear whether simulation occurs under
all circumstances in all language tasks. Simulation may not be necessary for shallow
language tasks, where a good-enough representation could be inferred simply from lin-
guistic information alone, using statistical relations between words (Barsalou et al. 2008;
Louwerse 2011). Embodied simulations could instead be reserved for deeper processing.
One aspect of language awaiting future research from this perspective is the learning
process. When and how are words linked with sensory systems? There have been some
attempts to describe this process, for example via Hebbian learning mechanisms under
the combined presence of object naming and the object’s sensory affordances (Pulvermuller
1999; Glenberg and Gallese 2012) or by exploiting iconic mappings between linguistic
form and meaning (Perniss et al. 2010).
It is clear that to move forward, embodied theories need to delve deeper into the
mechanisms that underlie the wealth of empirical data and formulate a clear, precise
and testable description of the specific nature of these processes and their temporal
properties.

3.2. Distributional theories

Distributional theories, traditionally viewed in sharp contrast with embodied theories,
are concerned with statistical patterns found in language itself, such as different types of
texts or documents. Here a word’s meaning is described by its distribution across the
language environment, and the mechanism for learning is clear: word meanings are
inferred from the statistical patterns present in language (see Gries; Geeraerts; and Divjak
and Caldwell-Harris this volume). Distributional approaches have traditionally assigned
no role to sensory-motor information, instead using only information present in
linguistic data.
Dominant distributional approaches developed within cognitive science are latent se-
mantic analysis (LSA, Landauer and Dumais 1997), hyperspace analogue to language
(HAL, Lund et al. 1995) and more recently Griffiths et al.’s topic model (e.g., Griffiths
et al. 2007). All these approaches use large samples of text, evaluating properties of the
contexts in which a word appears in order to estimate its relationship to other words,
but differ in the way contexts are treated and the way relationships among words are
assessed (see Riordan and Jones 2010 for a more in-depth review covering a broader
range of distributional models). The topic model does consider words in terms of the
contexts from which they are sampled, but differs from LSA and HAL in its assumptions:
contexts have themselves been sampled from a distribution of latent topics, each of
which is represented as a probability distribution over words (e.g., Griffiths et al. 2007).
The content of a topic is thus represented by the words to which it assigns a high
probability, so the semantic representation of each word can be considered to be its
distribution over latent topics, and the similarity between two words as the similarity of
their distributions over topics.
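To make this concrete, the idea that a word's meaning is its distribution over latent topics, and that word similarity is similarity between such distributions, can be sketched in a few lines of Python (the three topics and all probabilities are invented for illustration, and Jensen-Shannon divergence is just one common way to compare distributions, not necessarily the measure used by Griffiths et al.):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Invented distributions over three latent topics ("finance", "river", "pets").
word_topics = {
    "bank":  [0.55, 0.40, 0.05],  # ambiguous: high under two topics
    "money": [0.90, 0.05, 0.05],
    "river": [0.05, 0.90, 0.05],
    "dog":   [0.05, 0.05, 0.90],
}

def similarity(w1, w2):
    # Lower divergence between topic distributions = more similar meaning.
    return 1.0 - js_divergence(word_topics[w1], word_topics[w2])

assert similarity("bank", "money") > similarity("bank", "dog")
```

Because "bank" spreads its probability mass over both the finance and river topics, it comes out similar to "money" and to "river" while remaining distant from "dog", which is how the model keeps distinct senses apart.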
These approaches have successfully simulated many aspects of human behaviour, with
the topic model arguably the most advanced, as it provides a plausible solution to problems
faced by LSA, namely ambiguity, polysemy and homonymy. Words are assigned to
topics and can be represented across many topics with different probabilities so each
sense or meaning of a word can be differentiated. Figure 9.3c shows how the different
meanings of bank occur within two different topics. Words that share a high probability
under the same topics tend to be similar and predictive of each other. A further benefit
is that shared components of meaning are made explicit by providing a precise characterization
of what “topics” are in terms of probability distributions. In comparison, for
models like LSA or HAL it is presumed that words in similar contexts have related
meanings, but it is not specified how these meanings may be defined or described.

Fig. 9.3: Approaches to semantic representation. (a) In a semantic network, words are represented
as nodes, and edges indicate semantic relationships. (b) In a semantic space, words are
represented as points, and proximity indicates semantic association. These are the first
two dimensions of a solution produced by latent semantic analysis (Landauer and Dumais
1997). The black dot is the origin. (c) In the topic model, words are represented as
belonging to a set of probabilistic topics. The matrix shown on the left indicates the
probability of each word under each of three topics. The three columns on the right show
the words that appear in those topics, ordered from highest to lowest probability. Taken
from Griffiths, Steyvers and Tenenbaum (2007). Topics in semantic representation.
Psychological Review 114(2): 211−244.
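The contrasting intuition behind LSA- and HAL-style models, namely that words occurring in similar contexts have related meanings, can be illustrated with a toy co-occurrence model (the corpus and window size are invented; real models use very large corpora and typically apply dimensionality reduction):

```python
from collections import Counter
import math

corpus = ("the dog chased the cat . the cat chased the mouse . "
          "stocks rose at the bank . the bank raised interest rates .").split()

WINDOW = 2  # HAL-style sliding window (toy size)

def cooccurrence_vector(target):
    """Count words appearing within WINDOW positions of each occurrence of target."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == target:
            for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(c1, c2):
    shared = set(c1) & set(c2)
    dot = sum(c1[w] * c2[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in c1.values()))
            * math.sqrt(sum(v * v for v in c2.values())))
    return dot / norm if norm else 0.0

dog, cat, bank = (cooccurrence_vector(w) for w in ("dog", "cat", "bank"))
# "dog" and "cat" share contexts (e.g., "chased"), so they come out more similar.
assert cosine(dog, cat) > cosine(dog, bank)
```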
While none of these models themselves are developmental in nature (i.e., modeling
language acquisition), as they all compute representations based on a stable input corpus,
they nonetheless can be explicitly applied to developmental processes simply by compar-
ing the representations given different types of language corpora (e.g., comparing statisti-
cal patterns in corpora taken from children versus adults). Furthermore the probabilistic
nature of topic models permits the possibility that distributions of topics, words and
contexts may all change over time. As a result distributional models can be applied
directly, and make predictions relevant to language development in a way that is not
obvious for embodied theories.

3.2.1. Holistic theories

Distributional theories developed primarily from computational linguistics. Within
psychology, however, these theories have as predecessors holistic theories, and within
linguistics, theories of sense relations. Both are concerned with the organisation, or
structure, of semantic representations rather than their content, and thus assume that
concepts are represented in a unitary fashion.
Holistic theories take a non-decompositional, relational view: the meaning of words
should be evaluated as a whole, in terms of relations between words, rather than being
decomposed into smaller components (such as features). Words take their meaning from
relationships with other words, for example by associative links. In early theories of this
type, meaning was described by semantic networks (e.g., Quillian 1968; Collins and
Loftus 1975) where a word was denoted by a single node in a network and its meaning
by connections to other nodes. The full meaning of a concept arises from the whole
network, beginning from the concept node which alone is meaningless.
In holistic approaches, semantic similarity effects are explained in terms of spreading
activation from an activated node (such as a prime or distractor word) to other concepts
by connections between nodes (e.g., Quillian 1967). Response times in experimental
tasks would be driven by the time it takes a semantically similar node to reach an
activation threshold. As words that are semantically related will be closer together in the
semantic space than semantically unrelated words, activation spreads more quickly from
a related prime to the target word.
In some holistic models, differences between object-nouns and action-verbs have been
modelled in terms of different relational links (e.g., Graesser et al. 1987; Huttenlocher
and Lui 1979). In Wordnet (Miller and Fellbaum 1991) this is represented on a large
scale with four distinct networks representing nouns, verbs, adjectives and adverbs. The
representation of abstract words in Wordnet is no different from that of more concrete
words of the same grammatical class, although abstract words tend to occur in shallower hierarchies.
Regarding the relationship between words and concepts, a strict one-to-one mapping
is proposed. Each lexical concept is equal to a single, abstract representation in the
conceptual system. This means that conceptual systems must contain representations of
all concepts that are lexicalized in all languages. Any lexical differences that appear
cross-linguistically must be due to conceptual differences. In order to defend the univer-
sality of conceptual structure, one must assume that not all concepts are lexicalized in
each language (see Vigliocco and Filipović 2004).

3.2.2. Research supporting distributional theories

LSA (Landauer and Dumais 1997), topic model (Griffiths et al. 2007) and HAL (Lund
et al. 1995) have successfully simulated a number of semantic effects including semantic
similarity in semantic priming tasks. Using the word association norms of Nelson et al.
(1998), the topic model predicted associations between words at better-than-chance
levels and outperformed LSA in this as well as in a range of other semantic tasks.
LSA has successfully simulated a number of human cognitive behaviours. For exam-
ple, simulated scores on a standard vocabulary test have been shown to overlap with
human scores and simulations can mimic human word sorting behaviour (Landauer et
al. 1998). If these theories can successfully approximate human language comprehension
then they should be considered valid models of human language processing, reflecting
processes to some extent analogous to human language processing (Landauer and Du-
mais 1997).
Attempts have been made to directly test distributional models and their power to
predict neural activations. For example, Mitchell et al. (2008) found that voxel-level,
item specific fMRI activations for concrete nouns could be predicted on the basis of
distributional statistics based on a large text corpus, and similar data have been obtained
using EEG data (Murphy et al. 2009). Such findings suggest that there is a close relation-
ship between statistical co-occurrences of words in texts and neural activity related to
understanding those words, further supporting the viability of distributional theories.
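The logic of such studies, a linear mapping from distributional features to voxel activations that is fit on some words and evaluated on held-out words, can be sketched as follows (the features and "activations" are random stand-ins, and ordinary least squares is used rather than the exact method of Mitchell et al.):

```python
import numpy as np

rng = np.random.default_rng(0)

n_words, n_features, n_voxels = 40, 10, 50

# Stand-ins: distributional feature vectors (e.g., co-occurrence statistics)
# and voxel activations generated from a hidden linear map plus noise.
X = rng.normal(size=(n_words, n_features))
true_map = rng.normal(size=(n_features, n_voxels))
Y = X @ true_map + 0.1 * rng.normal(size=(n_words, n_voxels))

# Fit a linear map on the first 30 words, then predict the 10 held-out words.
W, *_ = np.linalg.lstsq(X[:30], Y[:30], rcond=None)
predicted = X[30:] @ W

# If distributional statistics carry information about neural activity,
# predictions for held-out words correlate with the observed activations.
r = np.corrcoef(predicted.ravel(), Y[30:].ravel())[0, 1]
assert r > 0.9
```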

3.2.3. Key issues and distributional theories

In comparison to earlier relational approaches, relations between different word types
are not pre-specified here; instead the same principles are used for all word types. Differences
between word types, such as noun-verb and concrete-abstract differences, are
captured in the relationships that emerge from these statistical models, patterns
that exist in the source texts. Thus, distributional models have no difficulty covering all
domains, as long as those domains are represented in the source texts.
The relationship between word meaning and conceptual knowledge is not explicitly
discussed by these theories, and they are therefore implicitly assumed to be the same.
The lack of connection between words and sensory-motor experience is a strong limita-
tion of distributional models, as discussed below.

3.2.4. Looking toward the future: Where should distributional theories go?

Despite the power of distributional models in simulating human behaviour, some have
argued that the statistical patterns that exist in language co-occurrences are merely epi-
phenomenal and play no role in semantic representation (Glenberg and Robertson 2000).
That language-based models do not take into account information from other sources of
meaning, such as perception and introspection, as embodied theories do, is a fundamental
criticism that these approaches need to address. In addition the models cannot account
for existing behavioural and neuroscientific evidence linking language to the brain’s
sensory-motor systems. One can use the famous “Chinese room” example (Searle 1980)
to highlight the importance of this argument: how can meaning be inferred simply from
the relationships that exist between amodal symbols that are themselves void of mean-
ing?
Recently, distributional approaches have been developing in order to solve the
“grounding” problem (Harnad 1990) by including experiential information as another
type of distributional data, bringing together embodied and distributional ideas that have
typically been considered independently. In the next section we will discuss this further.

4. An integrated proposal: Combining language-based and experiential information

Despite the apparent divide between embodied, experiential theories and amodal, distri-
butional theories, these two types of information can be integrated to form a more general
model of semantic representation. While sensorimotor information retains its role in
learning, linguistic information also contributes. We have all used dictionaries to learn
a word’s meaning, as well as inferred a new word’s meaning from its linguistic context
alone. The environment contains a rich source of both embodied and linguistic
data: we experience words both in a physical environment and a language environment
rather than one or the other. As Louwerse (2007) notes, the question should not be
whether semantics is embodied or symbolic, but rather, to what extent is language com-
prehension embodied and symbolic?
Meaning in language could be both embodied and language-based, with the contribu-
tion of each system dependent on the language task at hand. Dove (2009) describes the
conceptual system as divided into both modal and amodal representations with each
responsible for different aspects of meaning. For example, it seems impossible that as-
pects of cognition such as logical reasoning or mathematics do not depend at all upon
amodal symbols (Louwerse 2007).
The symbol interdependency theory (Louwerse 2007) describes meaning as composed
of symbols that are dependent on other symbols and symbols that are dependent on
embodied experiences. Here symbols are built upon embodied representations, but al-
though they are grounded, language comprehension can proceed simply via interrelations
amongst other symbols. Using linguistic representations allows for a more “quick and
dirty” response, whereas embodied simulations develop more slowly, accessing a wide
variety of detailed experiential information. Here, two predictions emerge. First, for shal-
low language tasks, involvement of linguistic representations should dominate over em-
bodied representations. Second, for tasks that involve a deeper level of processing, em-
bodied representations should dominate over linguistic. Barsalou et al. (2008) describe
similar ideas with lexical processing incorporating two processes: an early activation of
linguistic representations taking place in language areas of the brain and a later, situated
simulation involving modal systems.
Vigliocco et al. (2009) describe language as another vital source of information, along
with experiential information, from which semantic representations can be learnt. Statis-
tical distributions of words within texts provide important information about meaning
that can be integrated with sensory-motor experience. For example, a child could learn
the meaning of the word dog via experience with dogs’ perceptual features: having four
legs, barking etc., as well as language experience of hearing “dog”: it tends to occur
with words such as pet and animals. Combining both distributions of information allows
linguistic information to “hook up” to the world, thus grounding it.
Modern computational work is also beginning to model semantic meaning by integrat-
ing experiential and linguistic distributional data. It has been shown that models that
combine both types of distributional data perform better in simulating semantic effects
than either distribution alone (Andrews et al. 2009). The underlying principles employed
in distributional models can also be applied to other domains of experience, not
simply linguistic data. Johns and Jones (2012) proposed a model integrating both perceptual
information (in the form of feature norms) and statistical information from language.
Here, a word’s full meaning is denoted by the concatenation of perceptual and linguistic
vectors. Using a model of global lexical similarity with a simple associative mechanism,
perceptual representations for words for which the model had no perceptual information
could be inferred based on lexical similarity and the limited perceptual information of
other words already existing in the model. Importantly, the inference can also go the
other way, with the likely linguistic structure of a word estimated based on its perceptual
information. Thus the model is able to infer the missing representation of a word based
on either perceptual or linguistic information.
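A toy version of this inference might look as follows (the vectors and the similarity-weighted averaging are invented stand-ins for the feature norms and the global lexical similarity model of Johns and Jones):

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented linguistic (distributional) vectors for four words.
linguistic = {
    "dog":   np.array([0.9, 0.1, 0.8, 0.2]),
    "cat":   np.array([0.8, 0.2, 0.9, 0.1]),
    "piano": np.array([0.1, 0.9, 0.1, 0.8]),
    "wolf":  np.array([0.85, 0.15, 0.7, 0.25]),  # no perceptual vector known
}

# Invented perceptual (feature-norm-like) vectors; "wolf" is missing.
perceptual = {
    "dog":   np.array([1.0, 1.0, 0.0]),   # e.g., has-legs, barks, has-keys
    "cat":   np.array([1.0, 0.0, 0.0]),
    "piano": np.array([0.0, 0.0, 1.0]),
}

def infer_perceptual(word):
    """Estimate a missing perceptual vector as a lexical-similarity-weighted
    average of the perceptual vectors of known words."""
    weights = {w: max(cos(linguistic[word], linguistic[w]), 0.0)
               for w in perceptual}
    total = sum(weights.values())
    return sum(weights[w] * perceptual[w] for w in perceptual) / total

wolf = infer_perceptual("wolf")
# The full representation is the concatenation of the two vectors.
full = np.concatenate([linguistic["wolf"], wolf])
# "wolf" inherits more animal-like than piano-like perceptual features.
assert wolf[0] > wolf[2]
```

Because "wolf" is lexically close to "dog" and "cat", the weighted average lends it their perceptual features; the same scheme run in reverse would estimate a word's linguistic vector from its perceptual one.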
There are some potential shortcomings to current “integrated” models. Since concrete
feature norms are generated by speakers verbally and via introspection, using them as
“embodied information” means there are possible perceptual, sensorimotor and affective
aspects of experiential information that may not be included, suggesting that we cannot
generalize the findings to all word types. However, other methods for appropriately
modelling experiential information are being explored. Recent methods are beginning to
combine information from computer vision with text in distributional models; models
including visual information outperform distributional models based on text only, at least
when vision is relevant to words’ meanings (Bruni et al. 2012a, 2012b). Future work
will need to make use of more sophisticated types of perceptual information, as well as
incorporating other aspects of bodily experience such as action and emotion.

5. Conclusion
The state of the art in cognitive science proposes that the learning and representation of
word meanings involves the statistical combination of experiential information: sensori-
motor and affective information gleaned from experience in the world (extralinguistic),
and distributional linguistic information: statistical patterns occurring within a language
itself (intralinguistic). Research suggests that sensory-motor and affective systems play
a central role in grounding word meaning in our worldly experiences. This grounding is
thought to be crucial for the language system to learn new word meanings from word
meanings that are already embodied. The associations between linguistic units allow
learners to infer word meaning more quickly and to locate the corresponding experiential
information in the absence of any direct experience of the referent. By learning about
word meaning from both distributions in parallel, ultimately a richer form of semantic
information is gained.

6. References
Andrews, Mark, Gabriella Vigliocco and David P. Vinson
2009 Integrating experiential and distributional data to learn semantic representations. Psycho-
logical Review 116(3): 463−498.
Bak Thomas H., Dominic G. O’Donovan, John J. Xuereb, Simon Boniface and John R. Hodges
2001 Selective impairment of verb processing associated with pathological changes in Brod-
mann areas 44 and 45 in the motor neuron disease-dementia-aphasia syndrome. Brain
124: 103−120.
Barsalou, Lawrence W.
1999 Perceptual symbol systems. Behavioral and Brain Sciences 22: 577−660.
Barsalou, Lawrence W., Ava Santos, W. Kyle Simmons and Christine D. Wilson
2008 Language and simulation in conceptual processing. In: M. de Vega, A. M. Glenberg, and
A. C. Graesser (eds.), Symbols, Embodiment and Meaning, 245−283. Oxford: Oxford
University Press.
Barsalou, Lawrence W. and Katja Wiemer-Hastings
2005 Situating abstract concepts. In: D. Pecher and R. A. Zwaan (eds.), Grounding Cognition:
The Role of Perception and Action in Memory, Language, and Thought, 129−163. New
York: Cambridge University Press.
Bergen, Benjamin
this volume 1. Embodiment. Berlin/Boston: De Gruyter Mouton.
Bird, Helen, David Howard and Sue Franklin
2003 Verbs and nouns: The importance of being imageable. Journal of Neurolinguistics
16(2):113−149.
Bloom, Lois
1998 Language acquisition in its developmental context. In: D. Kuhn and R. S. Siegler (eds.),
Handbook of Child Psychology 2, 309−370. New York: Wiley
Boroditsky, Lera
2000 Metaphoric structuring: Understanding time through spatial metaphors. Cognition 75(1):
1−28.
Boroditsky, Lera, Michael Ramscar and Michael Frank
2001 The roles of body and mind in abstract thought. Proceedings of the 23rd Annual Confer-
ence of the Cognitive Science Society. University of Edinburgh.
Boulenger, Véronique, Laura Mechtouff, Stéphane Thobois, Emmanuel Broussolle, Marc Jeannerod
and Tatjana A. Nazir
2008 Word processing in Parkinson’s disease is impaired for action verbs but not for concrete
nouns. Neuropsychologia 46: 743−756.
Bretherton, Inge, and Marjorie Beeghly
1982 Talking about internal states: The acquisition of an explicit theory of mind. Developmen-
tal Psychology 18: 906−921.
Bruni, Elia, Marco Baroni, Jasper Uijlings and Nicu Sebe
2012a Distributional semantics with eyes: Using image analysis to improve computational rep-
resentations of word meaning. Proceedings of the 20th ACM International Conference on
Multimedia, 1219−1228.
Bruni, Elia, Gemma Boleda, Marco Baroni and Nam-Khanh Tran
2012b Distributional semantics in Technicolor. Proceedings of the 50th Annual Meeting of the
Association for Computational Linguistics, 136−145.
Buchanan, Erin M., Jessica L. Holmes, Marilee L. Teasley and Keith A. Hutchinson
2012 English semantic word-pair norms and a searchable Web portal for experimental stimu-
lus creation. Behavior Research Methods 44(4): 746−757.
Collins, Allan M. and Elizabeth F. Loftus
1975 A spreading-activation theory of semantic processing. Psychological Review 82: 407−
428.
Collins, Allan M. and M. Ross Quillian
1969 Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior
8: 240−247.
Connell, Louise and Dermot Lynott
2012 Strength of perceptual experience predicts word processing performance better than con-
creteness or imageability. Cognition 125(3): 452−465
Damasio, Antonio R.
1989 Time-locked multiregional retroactivation: A systems-level proposal for the neural sub-
strates of recall and recognition. Cognition 33: 25−62.
Damasio, Antonio R. and Daniel Tranel
1993 Nouns and verbs are retrieved with differently distributed neural systems. Proceedings
of the National Academy of Sciences of the United States of America 90: 4957−4960.
Divjak, Dagmar and Catherine Caldwell-Harris
this volume 3. Frequency and entrenchment. Berlin/Boston: De Gruyter Mouton.
Dove, Guy
2009 Beyond perceptual symbols: A call for representational pluralism. Cognition 110: 412−431.
Evans, Vyvyan
2003 The Structure of Time. Language, Meaning and Temporal Cognition. Amsterdam: Benjamins.
Farah, Martha J. and James L. McClelland
1991 A computational model of semantic memory impairment: Modality-specificity and emer-
gent category specificity. Journal of Experimental Psychology: General 120: 339−357.
Frege, Gottlob
[1892] 1952 On sense and reference. In: P. T. Geach and M. Black (eds. and Trans.), Philo-
sophical Writings of Gottlob Frege. Oxford: Basil Blackwell.
Geeraerts, Dirk
this volume 13. Lexical semantics. Berlin/Boston: De Gruyter Mouton.
Glenberg, Arthur M.
2011 How reading comprehension is embodied and why that matters. International Electronic
Journal of Elementary Education 4(1): 5−18.
Glenberg, Arthur M. and Vittorio Gallese
2012 Action-based language: a theory of language acquisition, comprehension and production.
Cortex 48(7): 905−922.
Glenberg, Arthur M. and Michael P. Kaschak
2002 Grounding language in action. Psychonomic Bulletin and Review 9: 558−565.
Glenberg, Arthur M. and David A. Robertson.
2000 Symbol grounding and meaning: A comparison of high-dimensional and embodied theo-
ries of meaning. Journal of Memory and Language 43: 379−401.
Gibbs, Raymond W. Jr.
this volume 8. Metaphor. Berlin/Boston: De Gruyter Mouton.
Graesser, Arthur C., Patricia L. Hopkinson and Cheryl Schmid
1987 Differences in interconcept organization between nouns and verbs. Journal of Memory
and Language 26: 242−253.
Gries, Stefan Th.
this volume 22. Polysemy. Berlin/Boston: De Gruyter Mouton.
Griffiths, Thomas L., Mark Steyvers and Joshua B. Tenenbaum
2007 Topics in semantic representation. Psychological Review 114(2): 211−244.
Hall, Richard
1984 Sniglets (snig’lit): Any Word That Doesn’t Appear in the Dictionary, But Should. Collier
Books.
Harnad, Stevan
1990 The symbol grounding problem. Physica D 42: 335−346.
Hauk, Olaf, Ingrid Johnsrude and Friedemann Pulvermuller
2004 Somatotopic representation of action words in human motor and premotor cortex. Neu-
ron 41(2): 301−307.
Huttenlocher, Janellen and Felicia Lui
1979 The semantic organization of some simple nouns and verbs. Journal of Verbal Learning
and Verbal Behavior 18: 141−179.
Johns, Brendan T. and Michael N. Jones
2012 Perceptual inference through global lexical similarity. Topics in Cognitive Science
4:103−120.
Kaschak, Michael P., Rolf A. Zwaan, Mark Aveyard and Richard H. Yaxley
2006 Perception of auditory motion affects language processing. Cognitive Science 30: 733−
744.
Kousta, Stavroula-Thaleia, Gabriella Vigliocco, David P. Vinson, Mark Andrews and Elena Del Campo
2011 The representation of abstract words: why emotion matters. Journal of Experimental
Psychology General 140: 14−34.
Kousta, Stavroula-Thaleia, David P. Vinson, and Gabriella Vigliocco
2009 Emotion words, regardless of polarity, have a processing advantage over neutral words.
Cognition 112(3): 473−481.
Lakoff, George and Mark Johnson
1980 Metaphors We Live By. Chicago: University of Chicago Press.
Lakoff, George and Mark Johnson
1999 Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought.
New York: Basic Books.
Landauer, Thomas K. and Susan T. Dumais
1997 A solution to Plato’s problem: The Latent Semantic Analysis theory of the acquisition,
induction, and representation of knowledge. Psychological Review 104: 211−240.
Landauer, Thomas K., Peter W. Foltz and Darrell Laham
1998 Introduction to Latent Semantic Analysis. Discourse Processes 25: 259−284.
Langacker, Ronald W.
1982 Foundations of Cognitive Grammar, Volume 1, Theoretical Prerequisites. Stanford:
Stanford University Press.
Louwerse, Max M.
2007 Symbolic or embodied representations: A case for symbol interdependency. In: T. Lan-
dauer, D. McNamara, S. Dennis, and W. Kintsch (eds.), Handbook of Latent Semantic
Analysis, 107−120. Mahwah: Erlbaum.
Louwerse, Max M.
2011 Stormy seas and cloudy skies: conceptual processing is (still) linguistic and perceptual.
Frontiers in Psychology 2(105): 1−4.
Lund, Kevin, Curt Burgess and Ruth A. Atchley
1995 Semantic and associative priming in high-dimensional semantic space. In: J. D. Moore
and J. F. Lehman (eds.), Proceedings of the 17th Annual Meeting of the Cognitive Science
Society, 17: 660−665.
Mahon, Bradford Z. and Alfonso Caramazza
2008 A critical look at the Embodied Cognition Hypothesis and a new proposal for grounding
conceptual content. Journal of Physiology − Paris 102: 59−70.
Martin, Alex, James V. Haxby, Francoise M. Lalonde, Cheri L. Wiggs and Leslie G. Ungerleider
1995 Discrete cortical regions associated with knowledge of color and knowledge of action.
Science 270(5233): 102−105.
McRae, Ken and Stephen Boisvert
1998 Automatic semantic similarity priming. Journal of Experimental Psychology: Learning,
Memory and Cognition 24: 558−572.
McRae, Ken and George S. Cree
2002 Factors underlying category-specific semantic deficits. In: E. M. E. Forde and G. W.
Humphreys (eds.), Category-Specificity in Brain and Mind, 211−250. East Sussex, UK:
Psychology Press.
Meteyard, Lotte, Bahador Bahrami and Gabriella Vigliocco
2007 Motion detection and motion verbs. Psychological Science 18(11): 1007−1013.
Meteyard, Lotte, Sara. R. Rodriguez Cuadrado, Bahador Bahrami and Gabriella Vigliocco
2012 Coming of age: A review of embodiment and the neuroscience of semantics. Cortex
48(7): 788−804.
Meteyard, Lotte, Nahid Zokaei, Bahador Bahrami and Gabriella Vigliocco
2008 Visual motion interferes with lexical decision on motion words. Current Biology 18(17):
732−733.
Miller, George A. and Christiane Fellbaum
1991 Semantic networks of English. Cognition 41: 197−229.
Mitchell, Tom M., Svetlana V. Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L. Malave,
Robert A. Mason and Marcel A. Just
2008 Predicting human brain activity associated with the meanings of nouns. Science 320:
1191.
Murphy, Gregory L.
2002 The Big Book of Concepts. Cambridge: MIT Press.
Murphy, Brian, Marco Baroni and Massimo Poesio
2009 EEG responds to conceptual stimuli and corpus semantics. Proceedings of the Confer-
ence on Empirical Methods in Natural Language Processing (EMNLP 2009), 619−627.
East Stroudsburg: ACL.
Nelson, Douglas L., Cathy L. McEvoy and Thomas A. Schreiber
1998 The University of South Florida word association, rhyme, and word fragment norms.
http://www.usf.edu/FreeAssociation/
Noppeney, Uta and Cathy J. Price
2004 Retrieval of abstract semantics. Neuroimage 22: 164−170.
Paivio, Allan
1986 Mental Representations: A Dual-Coding Approach. Oxford: Oxford University Press.
Patterson, Karalyn, Peter J. Nestor and Timothy T. Rogers
2007 Where do you know what you know? The representation of semantic knowledge in the
human brain. Nature Reviews Neuroscience 8: 976−987.
Pavan, Andrea and Giosuè Baggio
2013 Linguistic representations of motion do not depend on the visual motion system. Psycho-
logical Science 24: 181−188.
Pecher, Diane, René Zeelenberg and Lawrence W. Barsalou
2003 Verifying different-modality properties for concepts produces switching costs. Psycho-
logical Science 14(2): 119−124.
Perniss, Pamela, Robin L. Thompson and Gabriella Vigliocco
2010 Iconicity as a general property of language: Evidence from spoken and signed languages.
Frontiers in Psychology 1: 227.
Pulvermuller, Friedemann
1999 Words in the brain’s language. Behavioral and Brain Sciences 22: 253−336.
Pulvermuller, Friedemann, Olaf Hauk, Vadim V. Nikulin and Risto J. Ilmoniemi
2005 Functional links between motor and language systems, European Journal of Neurosci-
ence 21(3): 793−797.
Quillian, M. Ross
1967 Word concepts: A theory and simulation of some basic semantic capabilities. Behavioral
Science 12: 410−430.
Quillian, M. Ross
1968 Semantic memory. In: M. Minsky (ed.), Semantic Information Processing, 227−270.
Cambridge: MIT Press.
Riordan, Brian and Michael N. Jones
2010 Redundancy in linguistic and perceptual experience: Comparing distributional and fea-
ture-based models of semantic representation. Topics in Cognitive Science 3(2): 303−
345.
Rosch, Eleanor and Carolyn B. Mervis
1975 Family resemblance: Studies in the internal structure of categories. Cognitive Psychology
7: 573−605.
Santos, Ava, Sergio E. Chaigneau, W. Kyle Simmons, and Lawrence W. Barsalou
2011 Property generation reflects word association and situated simulation. Language and
Cognition 3: 83−119.
Schwanenflugel, Paula J. and Edward J. Shoben
1983 Differential context effects in the comprehension of abstract and concrete verbal materi-
als. Journal of Experimental Psychology: Learning, Memory, and Cognition 9(1): 82−
102.
Searle, John
1980 Minds, brains and programs. Behavioral and Brain Sciences 3(3): 417−457.
Simmons, W. Kyle and Lawrence W. Barsalou
2003 The similarity-in-topography principle: Reconciling theories of conceptual deficits. Cog-
nitive Neuropsychology 20: 451−486.
Svensson, Patrik
1999 Number and Countability in English Nouns: An Embodied Model. Uppsala: Swedish
Science Press.
Tyler, Lorraine K., Richard Russell, Jalal Fadili and Helen E. Moss
2001 The neural representation of nouns and verbs: PET studies. Brain 124: 1619−1634.
van Dam, Wessel O., Shirley-Ann Rueschemeyer and Harold Bekkering
2010 How specifically are action verbs represented in the neural motor system: an fMRI
study. Neuroimage 53: 1318−1325.
Vigliocco, Gabriella and Luna Filipović
2004 From mind in the mouth to language in the mind. Trends in Cognitive Sciences 8: 5−7.
Vigliocco, Gabriella, Stavroula-Thaleia Kousta, David P. Vinson, Mark Andrews and Elena Del Campo
2013 The representation of abstract words: What matters? A reply to Paivio. Journal of Ex-
perimental Psychology: General 142: 288−291.
Vigliocco, Gabriella, Lotte Meteyard, Mark Andrews and Stavroula-Thaleia Kousta
2009 Toward a theory of semantic representation. Language and Cognition 1: 215−244.
Vigliocco, Gabriella, Daniel Tranel and Judit Druks
2012 Language production: patient and imaging research. In: M. Spivey, K. McRae and M.
Joanisse (eds.), Cambridge Handbook of Psycholinguistics, 443−464. Cambridge: Cam-
bridge University Press.
Vigliocco, Gabriella, David Vinson, Judit Druks, Horacio Barber and Stefano F. Cappa
2011 Nouns and verbs in the brain: A review of behavioural, electrophysiological, neuropsy-
chological and imaging studies. Neuroscience and Biobehavioral Reviews 35: 407−
426.
Vigliocco, Gabriella, David P. Vinson, William Lewis and Merrill F. Garrett
2004 Representing the meanings of object and action words: The featural and unitary semantic
space hypothesis. Cognitive Psychology 48: 422−488.
Vinson, David P. and Gabriella Vigliocco
2002 A semantic analysis of noun-verb dissociations in aphasia. Journal of Neurolinguistics
15: 317−351.
Wellman, Henry M., Paul L. Harris, Mita Banerjee and Anna Sinclair
1995 Early understanding of emotion: Evidence from natural language. Cognition and Emo-
tion 9: 117−149.
Willems, Roel M., Peter Hagoort and Daniel Casasanto
2010 Body-specific representations of action verbs: Neural evidence from right- and left-
handers. Psychological Science 21(1): 67−74.

Laura J. Speed, David P. Vinson and Gabriella Vigliocco, London (UK).