
Language

All human communities have, and use, language. Language allows humans to refer to
objects, properties, actions, abstract entities, and other aspects of the world, and to convey
and retrieve thoughts in a way that seems both fast and effortless. Both in terms of its
complexity and internal structure and in terms of its expressive power, human language is
well beyond any communicative system available to nonhumans. Below we survey some
basic empirical evidence and theorizing about the nature and properties of human language,
the way language is produced and understood, and the way language is acquired by
children.

The Nature of Language


Even though there are about 4,000 languages in the world today, they all share major
design features which characterize the human faculty of language in general. One such
design feature is creativity: speakers of a language can produce and understand sentences
that they have never uttered or heard before. For instance, it is possible to understand the
meaning of the sentence Napoleon never went to the moon (and agree that the sentence
expresses something true) without ever having encountered this sentence before. This fact
shows that the human language ability does not rest on memorizing and storing linguistic
strings but rather involves drawing on the finite number of items in one’s vocabulary and
combining them in novel and systematic ways to form a potentially infinite number of new
sentences.
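This combinatorial character of language can be made concrete with a toy grammar. The Python sketch below is only an illustration under invented assumptions (a nine-word lexicon and two phrase-building rules, nothing like a real grammar of English), yet even this tiny system yields hundreds of sentences that were never stored as wholes.

```python
# A minimal sketch, assuming an invented nine-word lexicon and two toy
# combination rules: finite stored items plus combinatorial rules yield
# many sentences that were never memorized in advance.
import itertools

lexicon = {
    "Det": ["the", "a"],
    "Adj": ["red", "small"],
    "N":   ["umbrella", "moon", "emperor"],
    "V":   ["sees", "avoids"],
}

def noun_phrases():
    """NP -> Det (Adj) N: every noun phrase the toy grammar allows."""
    for det, n in itertools.product(lexicon["Det"], lexicon["N"]):
        yield f"{det} {n}"
        for adj in lexicon["Adj"]:
            yield f"{det} {adj} {n}"

def sentences():
    """S -> NP V NP: combine phrases into full (often novel) sentences."""
    nps = list(noun_phrases())
    for subj, verb, obj in itertools.product(nps, lexicon["V"], nps):
        yield f"{subj} {verb} {obj}"

print(len(list(sentences())))   # 648 distinct sentences from 9 stored words
```

Enlarging the lexicon or letting rules apply recursively (for example, embedding one sentence inside another) makes the set of possible sentences unbounded, which is the sense in which a finite vocabulary supports a potentially infinite output.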
A second design feature of language is structure: structural principles and rules constrain
the kind of sentences that speakers can generate. Since language involves several
interconnected levels, from sound to syntax to meaning, each level is governed by a
specialized set of rules. For instance, one rule governing the sound structure of English
specifies that the sequence tl cannot occur at the beginning of a new word (this is why
Tlong is an unlikely name for a new cartoon character but Klong is not). Another kind of rule
specifies the relative position of adjectives and nouns (red umbrella is an acceptable English
phrase but umbrella red is not). A major goal for formal theories of language has been to
discover the full set of principles underlying human languages. Following the work of Noam
Chomsky, a further goal for many theorists has been to identify a core set of such basic
principles (otherwise known as Universal Grammar) which can serve as the innate basis for
all human languages.
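The filtering role of such rules can be sketched in the same spirit. The two checks below are simplified stand-ins for the examples just given (a ban on word-initial tl and adjective-before-noun order); the word lists are invented for illustration and do not come from any formal theory.

```python
# A minimal sketch of rules at two levels filtering possible forms.
# The constraints and word lists are simplified illustrations only.

BANNED_ONSETS = ("tl",)                 # sound level: no word may begin with "tl"
ADJECTIVES = {"red", "small"}           # toy lexicon for the ordering rule
NOUNS = {"umbrella", "character"}

def obeys_sound_rule(word):
    """Reject candidate words whose beginning violates the toy phonotactic rule."""
    return not word.lower().startswith(BANNED_ONSETS)

def obeys_order_rule(phrase):
    """In English, an adjective precedes the noun it modifies ("red umbrella")."""
    words = phrase.lower().split()
    return all(
        not (w in NOUNS and nxt in ADJECTIVES)   # rules out "umbrella red"
        for w, nxt in zip(words, words[1:])
    )

print(obeys_sound_rule("Klong"), obeys_sound_rule("Tlong"))                # True False
print(obeys_order_rule("red umbrella"), obeys_order_rule("umbrella red"))  # True False
```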

The Production and Comprehension of Language


Humans comfortably produce speech at a rate of about four words per second, and
comprehension keeps pace with production. Even though speaking and understanding
speech seem effortless, both processes are supported by complex cognitive machinery.
Speaking involves multiple overlapping stages beginning with the intention to convey a
message and ending with the formulation and execution of a sentence encoding that
message. Speakers work in a top-down way, first deciding what they want to express, then
choosing words and structures to communicate their message, and finally programming
the sound form of the planned utterance so that it can be articulated. The
least well understood of these phases is the process of message preparation: while it is clear
that this stage includes non-linguistic apprehension of events and objects (and therefore
interfaces with perceptual/conceptual representations of the world), the precise form of these
non-linguistic representations remains elusive. More is known about the processes
underlying the selection and assembly of words and sounds of a sentence. In fact, evidence
from several laboratories points to some degree of independence between the word- and
sound-combining levels of speech planning: for instance, the tip-of-the-tongue phenomenon
(the familiar sense that we cannot retrieve a word that we know) reveals that speakers can
access information about the grammatical class of a word (e.g., whether it is a noun or verb)
without accessing information about how the word sounds. Further evidence for the systems
underlying language production comes from speech errors (or slips of the tongue). Such
errors reveal rule-like constraints in how words or sounds are arranged when we prepare
speech. For instance, when one word intrudes on or trades places with another during a slip
of the tongue, the intruding word almost always belongs to the same grammatical class as its
target (nouns replace nouns, verbs replace verbs; e.g., swimmers sink becomes swimmers
drown). When sounds switch places, they almost always exchange with sounds of the same
class (vowels with vowels, consonants with consonants; e.g., snow flurries becomes flow
snurries). Thus even speech errors seem to
involve principled choices over abstract representations on the word or sound level – thereby
offering indirect evidence for similar mechanisms in regular, error-free production. Such
errors further demonstrate that speaking involves the active construction of utterances ‘on
the fly’ from smaller linguistic units.
Language comprehension (or parsing, as it is often called), unlike production, might
appear to work in a bottom-up way: hearers are often thought to start with the sounds they
hear, then identify the words and group them into structures, and proceed to infer what the
speaker meant by a sentence. However, there is evidence of top-down influences on
parsing, since hearers may bring real-world knowledge or expectations about what speakers
are trying to do to bear on the ongoing interpretation of an utterance. For instance, when
people are placed at a table with several objects on it and are asked Pick up the candle, they
move their eye gaze to the candle before they reach to perform the requested action. These
spontaneous eye movements are a good measure of people’s interpretation of incoming
linguistic material. It turns out that hearers start looking at the candle approximately 50 msec
before the end of the word candle. But if there is candy on the table along with the candle,
eye movements to the candle start only 30 msec after the end of the word. This shows that
auditory information as well as contextual information (the specific objects in a scene) affects
word identification.
Similar evidence exists for structure identification. Consider a sentence that begins as
follows: Put the apple on the towel…. This may mean either that the apple should be placed
on the towel, or that the apple that is on the towel should be placed in some location (to be
specified in the rest of the sentence). In the absence of context, listeners prefer the first
interpretation (e.g., if the sentence continues with …into the box, listeners become
temporarily confused). But when placed at a table with two apples, one of which is on a
towel and the other on a napkin, listeners’ eye movements show that they access the second
interpretation from the beginning (sentence continuations with …into the box cause no
comprehension problems). During the interpretation of a sentence, then, syntactic, lexical
and contextual factors are rapidly integrated and affect the process of grouping words into
linguistic structures.
As this and other experimental evidence shows, language comprehension is incremental:
when hearers encounter a sentence, they begin to incorporate incoming words into a
growing, richly structured representation of the sentence (rather than store them as a list and
wait for the end of the sentence to recover the structure). Furthermore, hearers attempt to
connect this representation of the sentence to the world around them. The precise way in
which syntactic, lexical and contextual factors impact parsing, as well as the way in which the
language comprehension processes co-ordinate with the systems underlying language
production, remain major questions in the psychology of language.
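One way to picture incremental, context-sensitive interpretation is as the progressive narrowing of a set of candidate words. The sketch below uses invented word lists and scenes purely for illustration; it mirrors the candle/candy finding in that each incoming segment of the word prunes the candidate set, and a scene without a competing object lets the unique referent emerge sooner.

```python
# A minimal sketch of incremental word recognition with contextual constraint.
# The vocabulary and scenes are invented; segment counts stand in (crudely)
# for how much of the word must be heard before it is uniquely identified.

def recognition_point(word, candidates):
    """How many initial segments of `word` must arrive before it is the only
    remaining candidate (returns len(word) if it never becomes unique)."""
    for i in range(1, len(word) + 1):
        remaining = {c for c in candidates if c.startswith(word[:i])}
        if remaining == {word}:
            return i
    return len(word)

vocabulary = {"candle", "candy", "camera", "towel", "apple"}

scene_without_competitor = {"candle", "towel", "apple"}
scene_with_competitor = {"candle", "candy", "towel", "apple"}

# Restricting the candidates to objects in the scene models the top-down
# contribution of context; the bottom-up input is the unfolding word itself.
print(recognition_point("candle", vocabulary & scene_without_competitor))  # 1
print(recognition_point("candle", vocabulary & scene_with_competitor))     # 5
```

Nothing in this toy model settles how syntactic, lexical and contextual constraints are weighted in real-time parsing; it merely shows how bottom-up input and contextual restriction can jointly determine when a word is identified.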

The Acquisition of Language


All human beings, under normal rearing circumstances, acquire language. Language
learning begins at birth, if not earlier. Newborns are able to discriminate between possible
sounds of human languages, even when these sounds do not belong to their native
language. Around the first year, as children begin to acquire the sound system of their native
tongue, the ability to distinguish between foreign sounds is mostly lost. Children can
understand certain words as early as 9 months and they start producing words around their
first birthday. First words (mostly, names for objects or individuals) are used in isolation and
later in simple (two-word) sentences. Around the age of 2 or 3, grammatical markers (such
as -ed and -ing in English) appear, and vocabulary growth accelerates and diversifies (with
verbs, adjectives and other terms being added). Between the ages of 3 and 5, children’s
sentences increase in length and complexity; a typical 5-year-old already knows about
10,000 to 15,000 words.
The acquisition of language seems to take place fast, effortlessly, accurately and mostly
without explicit instruction from adults. Children can acquire language even in societies
where no language is directed to infants before infants themselves speak. How is this feat
accomplished? Part of the answer lies with the fact that language learning requires the
discovery of regularities on multiple levels of the linguistic input, and there is evidence that
even very young infants are sensitive to linguistic patterns (such as recurring sound
sequences). However, most theorists agree that children’s search for patterns in the
linguistic input has to be guided by strong biological influences. There are several arguments
supporting this position. First, there are universal properties of language that surface in all of
the languages that have so far been studied. Second, as described above, language
acquisition seems to proceed universally through the same stages for all learners despite
differences in culture, socio-economic level, parenting style, motivation and other factors in
the learners’ environment. Additionally, linguistic abilities seem to be specialized and often
dissociate from general-cognitive abilities in pathology. For instance, children with Williams
syndrome, whose IQs are below normal, have intact language-learning capacities; children
with Specific Language Impairment are characterized by deficits in the time-line and nature
of their language learning, even though their IQs are within the normal range.
Some of the most compelling evidence for humans’ biological preparedness for language
comes from the resilience of language learning in the face of absent or degraded input.
Children who are not exposed to conventional language (e.g., deaf children growing up
among hearing adults without access to sign language) have been known to invent
spontaneous gesture systems in order to communicate. Crucially, the properties of these
systems seem to be similar to those of early speech (children begin with simple gestures,
then produce two-gesture combinations and later more complex ‘sentences’). Similarly,
children exposed to improvised and irregular language-like systems (pidgins) constructed by
members of different language communities who find themselves living in the same
environment transform these imperfect systems, as they learn them, into rule-governed
languages (creoles). Finally, children whose parents are non-native speakers of either a
spoken or a signed language typically regularize the incorrect linguistic forms they are
exposed to. In all these cases, children go beyond the information they encounter in the input
– in a sense, they create, rather than simply acquire, language.
Further support for the conclusion that learners contribute substantially to language
growth comes from studying how changes in a learner’s mental preparedness impact
language acquisition. Evidence from abandoned and neglected children suggests that the
age at which these children were found had a profound effect on whether they could
subsequently acquire language. Experimental data from sign language show that late exposure to a
first (signed) language has deep negative effects on learning (similar effects of late exposure
hold for second language acquisition). Such ‘critical period’ effects demonstrate that the
maturational state of the learner’s brain is crucial for the attainment of a language system.
Several questions remain open about the mechanisms underlying language learning.
One issue is whether specific aspects of language acquisition should be attributed to
language-specific vs. general-purpose learning mechanisms. Another issue is whether
children’s native language can affect the way they think, and whether language is necessary
or helpful for the development of human concepts.

Anna Papafragou
See also: Aphasias; Cognition and perception; Reading; Top-down and bottom-up
processing; Word recognition

Suggested Further Readings


Baker, M. (2001). The atoms of language. Oxford: Oxford University Press.
Bloom, P. (Ed.). (1993). Language acquisition: Core readings. Cambridge, MA: MIT Press.
Chomsky, N. (1986). Knowledge of language: Its nature, origin, and use. New York: Praeger.
Clifton, C., Frazier, L., & Rayner, K. (Eds.). (1994). Perspectives on sentence processing. Hillsdale, NJ: Erlbaum.
Gleitman, L., & Liberman, M. (Eds.). (1995). An invitation to cognitive science, Vol. 1: Language. Cambridge, MA: MIT Press.
Pinker, S. (1994). The language instinct. Penguin Books.
Trueswell, J., & Tanenhaus, M. (Eds.). (2006). Approaches to studying world-situated language use. Cambridge, MA: MIT Press.
