Music and Language: A Developmental Comparison
The possible links between music and language continue to intrigue sci-
entists interested in the nature of these two types of knowledge, their
evolution, and their instantiation in the brain. Here we consider music
and language from a developmental perspective, focusing on the degree
to which similar mechanisms of learning and memory might subserve
the acquisition of knowledge in these two domains. In particular, it seems
possible that while adult musical and linguistic processes are modular-
ized to some extent as separate entities, there may be similar develop-
mental underpinnings in both domains, suggesting that modularity is
emergent rather than present at the beginning of life. Directions for fu-
ture research are considered.
Erin McMullen & Jenny R. Saffran
experience with the two domains as distinct from one another. Although
discussions of modularity are often confounded with assumptions about
innateness, Karmiloff-Smith and others have pointed out that modularity
and innateness of specific mental capacities are not inextricably linked
(Elman et al., 1996; Karmiloff-Smith, 1992). Furthermore, as Elman and
colleagues (1996) argue, the sheer quantity of genetic material that would
be required to directly encode specific modules for higher cognition is quite
unwieldy. Finally, although a specific module for language perception and
comprehension would have plausible sexual selection advantages, the di-
rect benefits of a music module are less obvious (but see Huron, 2003;
Trehub, 2003). The modularity question remains an open and vexing is-
sue, and significant further research is required to clarify it; some potential
avenues for such research may be suggested by the comparisons of the
ontogeny of music and language drawn here.
What Do We Learn?
materials, the fact that they do show categorical perception for nonspeech
analogs of particular consonant contrasts created from tones suggests that
it is likely that they, too, would show categorical perception for at least
some musically relevant auditory input (Jusczyk, Rosner, Reed, & Kennedy,
1989).
The auditory system helps to determine which categories are to play a
role in each domain. Nonhuman species that presumably did not evolve to
perceive speech appear to detect many of the same phonetic categories as
humans (e.g., Kuhl & Miller, 1975; Kuhl & Padden, 1982). Moreover,
these speech categories are perceived by infants whose native languages do
not use them (for review, see Aslin, Jusczyk, & Pisoni, 1998). Thus, in the
absence of experience, infants chunk auditory space into the speech sound
repertoire from which human languages sample; for example, young Japa-
nese infants treat /r/ and /l/ as members of two distinct categories, unlike
Japanese adults. Similarly, infants show preferences for particular types of
musical sounds. In particular, infants prefer consonant intervals over disso-
nant intervals from quite early in life (Trainor & Heinmiller, 1999; Trainor,
Tsang, & Cheung, 2002; Zentner & Kagan, 1998). Some of these musical
predispositions can also be demonstrated in nonhuman animals. For ex-
ample, rhesus monkeys demonstrate octave equivalence only given tonal
materials; atonal materials do not induce this perceptual capacity, suggest-
ing auditory predispositions for tonality processing in nonhuman primates
(Wright, Rivera, Hulse, Shyan, & Neiworth, 2000). Similarly, neurophysi-
ological recordings in rhesus monkeys and humans suggest similar neural
signatures for consonant versus dissonant materials (Fishman, 2001).
If linguistic and musical systems never varied across cultures, then these
predispositions might be sufficient to explain how such systems are pro-
cessed. However, the languages and musical systems of the world exhibit
substantial differences. Thus, infants and young children must learn the
specific features of the systems in their environments. By 6 months of age,
infants’ speech perception abilities are attuned to the vowels in their native
language (Kuhl, Williams, Lacerda, Stevens, & Lindblom, 1992), suggest-
ing that just a few months of passive exposure to ambient speech is suffi-
cient to shape infants’ vowel processing; extensive experience with speech
production is not required. Similarly, infants’ consonant perception is at-
tuned to the native language by 10 to 12 months of age (Werker & Lalonde,
1988). In both cases, infants have shifted from categorizing all speech sounds,
regardless of their status in the native language, to discriminating contrasts
between native-language categories differently than nonnative language
categories. This nonnative to native shift implicates powerful learning abili-
ties that operate implicitly during the first year of life. Somehow, infants
are able to learn which sounds mark meaningful differences in their lan-
guage before they have extensive access to word meanings.
Moving beyond the segmental level (phonemes and tones), it is clear that
the suprasegmental cues in both language and music are highly salient to
infants. These patterns of rhythm, stress, intonation, phrasing, and con-
tour most likely drive much of the early processing in both domains. Such
prosodic information is the first human-produced external sound source
available in utero; the filtering properties of the fluid-filled womb leave
rhythmic cues largely intact while attenuating high-frequency information.
Fetuses avail themselves of the incoming rhythmic patterns; again, this is a
process of implicit, nonreinforced learning. For example, newborn infants
prefer their mother’s voice on the basis of prenatal learning (DeCasper &
Fifer, 1980). Fetal learning also encompasses the rhythmic patterns of the
mother’s native language, allowing newborn infants to use this experience
to differentiate between languages (Mehler et al., 1988). It is likely that
infants acquire specific information about musical rhythmic information
in their prenatal environments as well, assuming sufficient exposure and
quality of auditory input (maternal singing is presumably the best source
of such input).
After birth, infants continue to be attuned to prosodic information in
both domains. This may be in part due to learning in the womb. It is also
likely a function of the richness of the prosodic structure in the infants’
environments. Both linguistic and musical input are modified by caregivers
in ways that appear to be maximally attractive to infants. Infant-directed
speech, in comparison to adult-directed speech, is characterized cross-lin-
guistically by a slower rate of speech, higher fundamental frequency, greater
range of pitch variation, longer pauses, and characteristic repetitive into-
nation contours (Fernald, 1992). Other modifications also might enhance
early learning; for example, vowels in infant-directed speech are produced
in a more extreme manner, resulting in heightened distinctiveness between
vowel categories (Kuhl, Andruski, Chistovich, & Chistovich, 1997). In-
fant-directed speech captures infants’ attention more readily than adult-
directed speech (Cooper & Aslin, 1990; Fernald, 1985). Moreover, learn-
ing appears to be facilitated by the exaggerated prosodic contours of
infant-directed speech; infants detect word boundaries in fluent speech more
readily when the same items are spoken with infant-directed prosody as
opposed to adult-directed prosody (Thiessen, Hill, & Saffran, 2004).
Caregivers also engage musically with their infants in ways that differ
from adult-directed music. The play-songs and lullabies that dominate in-
who had never heard any of these pieces. As expected, the control group
showed no preference for excerpts from the familiar versus the novel sona-
tas. However, the experimental group evidenced effects of their previous
exposure to these pieces, showing a significant difference in listening pref-
erences to the familiar versus the novel sonatas. Subsequent experiments
demonstrated that the infants were not merely remembering random snip-
pets of the music, but instead had represented aspects of the overall struc-
ture of the piece, with expectations regarding the placement of particular
passages (Saffran et al., 2000). These results suggest that infants’ musical
memory may be as nuanced as their linguistic memory.
Other recent studies investigating infant long-term memory for music
similarly suggest that infants’ auditory representations are quite detailed.
For example, infants can represent more complex pieces of music, such as
Ravel piano compositions, in long-term memory (Ilari & Polka, 2002).
Moreover, the contents of infants’ memories include some extremely spe-
cific aspects of musical performances. Ten-month-olds represent acoustic
patterns drawn from the specific performances with which they were pre-
viously familiarized (Palmer, Jungers, & Jusczyk, 2001). Six-month-old
infants remember the specific tempo and timbre of music with which they
are familiarized, failing to recognize pieces when they are played at new
tempos or with new timbres, although recognition is maintained when pieces
are transposed to a new key (Trainor et al., 2002). It thus appears that
infant representations are extremely specific, not affording the opportunity
to generalize to include changes in tempo or timbre. This level of specificity
must change with age, as either a function of experience or development,
else listeners would not be able to recognize familiar music played on dif-
ferent instruments or at different rates. It should be noted, however, that
the ability to remember specific performance characteristics like key, tempo,
and timbre is not lost completely during development (Levitin, 1994, 1996;
Palmer et al., 2001; Schellenberg, Iverson, & McKinnon, 1999; Schellenberg
& Trehub, 2003). The ability to encode music abstractly complements the
capacity to engage in absolute encoding.
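The contrast between absolute and relative encoding can be made concrete with a small sketch. This is our own illustration, not a model from the studies cited above: the MIDI-style note numbers and the interval representation are assumptions chosen for clarity. A transposed melody no longer matches its original under absolute encoding, but its interval pattern is unchanged.

```python
# Illustrative sketch only: note numbers and the interval code are invented
# for this example, not drawn from the cited infant studies.

def intervals(notes):
    """Relative encoding: successive semitone steps between notes."""
    return [b - a for a, b in zip(notes, notes[1:])]

melody = [60, 62, 64, 65, 67]           # C D E F G
transposed = [n + 5 for n in melody]    # the same tune, a perfect fourth higher

# Absolute encodings differ, so a strictly absolute listener treats these as
# distinct; relative (interval) encodings match, supporting transposition
# invariance of the kind adult listeners show.
print(melody == transposed)                        # False
print(intervals(melody) == intervals(transposed))  # True
```

A listener who stores both representations, as the multiple-levels view above suggests, can recognize a transposed tune while still noticing that its key has changed.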
Similar shifts in specificity obtain for linguistic materials. For example,
7.5-month-old infants include talker-specific cues in their representations
of spoken words; they have difficulty recognizing words when they are
spoken in new voices, whereas 10.5-month-olds do not (Houston & Jusczyk,
2000). However, even younger infants are able to ignore talker-specific
properties under other circumstances—in particular, infants readily exhibit
vowel normalization, categorizing individual exemplars according to vowel
identity despite differences in speaker sex (Kuhl, 1979, 1983). Infants thus
appear to process linguistic auditory events at multiple levels of detail si-
multaneously. We see similar abilities to track multiple levels of informa-
tion in the domain of pitch perception; in some tasks, infants appear to
track absolute pitches, with no evidence of relative pitch representations
(Saffran & Griepentrog, 2001), whereas slight task manipulations lead in-
fants to focus on relative pitch representations (Saffran, Reeck, Niehbur, &
Wilson, 2004). These findings are reminiscent of results with an avian spe-
cies—starlings—who can switch from relying on absolute pitch cues to
using relative pitch cues when necessitated by the structure of the task
(MacDougall-Shackleton & Hulse, 1996).
More insight into the development of auditory memory is being pro-
vided by recent work using EEG with young infants. Important compo-
nents of adult ERP responses are seen even shortly after birth (Kushnerenko,
2003), including the mismatch negativity (MMN), a preattentive index of
auditory change detection elicited when a sequence of repetitive standard
stimuli is interrupted by an infrequent one that deviates from the standard
on a particular criterion of interest. The apparently short duration of the
memory traces underlying the MMN has made infant research somewhat
more difficult than studies using this method with adults (Cheour,
Ceponiene, et al., 2002); however, some interesting results have nonethe-
less been obtained. Cheour et al. (1998) have demonstrated that between
the ages of 6 months and 1 year, infants’ processing of phonemic contrasts
changes, consistent with prior behavioral data. In their study, they pre-
sented infants with one standard Finnish vowel, one deviant Finnish vowel,
and one deviant Estonian vowel. They found that at 6 months, infants’
EEG traces display a tendency to respond more strongly when an infre-
quent stimulus is more acoustically distinct—in this case, the Estonian
vowel—whereas by 1 year, they exhibit larger MMNs to the less-distinct
but phonemically different Finnish vowel (Cheour et al., 1998). Learning
such contrasts is possible even in the youngest infants. Less than a week
after birth, Finnish infants exposed to the vowel contrast /y/ versus /y/i/
while sleeping showed an MMN-like response to the less frequent sound,
whereas those with no exposure or unrelated exposure showed no such
response—indicating that newborn auditory memory is sufficient to per-
mit the learning of new phonetic contrasts without any possibility of con-
scious attention (Cheour, Martynova, et al., 2002). To our knowledge, no
comparable infant MMN research has been done involving musical stimuli.
Given that the requisite MMN to pitch change is observable in young lis-
teners, this is a fertile field for further investigation of infant memory.
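The oddball design behind these MMN studies can be sketched in a few lines. The probabilities and labels below are invented for illustration and are not taken from any of the cited experiments: the stimulus stream is simply a long run of standards occasionally interrupted by deviants.

```python
import random

def oddball_sequence(n_trials, p_deviant=0.1, seed=0):
    """Build a stimulus stream for an oddball paradigm: repeated 'standard'
    trials occasionally interrupted by an infrequent 'deviant' trial.
    p_deviant here is a placeholder value, not one from the cited studies."""
    rng = random.Random(seed)
    return ["deviant" if rng.random() < p_deviant else "standard"
            for _ in range(n_trials)]

seq = oddball_sequence(2000)
print(seq.count("deviant") / len(seq))  # close to the nominal 0.1
```

The MMN is then computed offline as the difference between the averaged responses to deviant and standard trials, which is why a rapidly decaying memory trace for the standard limits how slowly infant sequences can be presented.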
nisms: rules and statistics. Rules require learners to abstract away from the
specific items in their experience to discover underlying structure. The clas-
sic formulation of this process comes from Chomsky (1959), who noted
that while no listener had ever heard the sentence “Colorless green ideas
sleep furiously,” that sentence was nevertheless grammatical (compare it
with the ungrammatical “Furiously green sleep ideas colorless”). Similar
ideas have been advanced to explain certain aspects of music cognition,
including expectancies and decisions about well-formedness and grouping
(Lerdahl & Jackendoff, 1983; Narmour, 2000). An experimental demon-
stration of this type of process is the study by Marcus et al. (1999) men-
tioned earlier. In this study, 7-month-old infants were exposed to sentences
like “wo fe fe,” “ti la la,” and “bi ta ta.” They were then tested on novel
sentences that followed the ABB rule, such as “ko ga ga,” versus novel
sentences that violated it by following an ABA pattern, such as “ko ga
ko.” The hallmark of
rule-based learning is to have abstracted away from the particular elements
in the input to recognize “grammatical” sequences that have not been heard
before; the infants in Marcus’ study achieved this after just a few minutes
of exposure.
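The abstract pattern the infants generalized can be stated as a simple predicate. This sketch is an experimenter’s-eye description of the ABB regularity, not a model of how infants learn it; the function name is ours.

```python
def follows_abb(sentence):
    """True when a three-syllable string fits the ABB pattern:
    the last two syllables match and differ from the first."""
    a, b, c = sentence.split()
    return a != b and b == c

# Generalization to novel syllables is the signature of rule-based learning:
print(follows_abb("wo fe fe"))  # True  (training-like item)
print(follows_abb("ko ga ga"))  # True  (novel item, same abstract pattern)
print(follows_abb("ko ga ko"))  # False (ABA pattern)
```

Because the test items share no syllables with training, success cannot rest on memorized strings; only the variable-based pattern distinguishes the grammatical items.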
Another learning mechanism that has received growing attention is sta-
tistical learning: detecting patterns of sounds, words, or other units in the
environment that cue underlying structure (for a recent review, see Saffran,
2003a). The environment contains massive amounts of statistical informa-
tion that is roughly correlated with various levels of structure. For example,
the probabilities with which syllables follow one another serve as cues to
word boundaries; syllable sequences that recur consistently are more likely
to be words than sequences that do not (compare the likelihood that “pre”
is followed by “ty” to the likelihood that “ty” is followed by “bay”, as in
the sequence “pretty baby”). These statistics are readily captured by young
infants; 8-month-olds can discover word boundaries in fluent speech, after
just 2 minutes of exposure, based solely on the statistical properties of
syllable sequences (e.g., Aslin, Saffran, & Newport, 1998; Saffran, Aslin,
& Newport, 1996).
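The transitional-probability computation described above can be sketched directly; the toy corpus and syllable spellings below are invented for illustration.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(y | x) for adjacent syllables: count(x, y) / count(x in first position)."""
    pairs = list(zip(syllables, syllables[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(x for x, _ in pairs)
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Toy corpus: "pretty baby pretty doggy", broken into syllables.
corpus = "pre ty bay bee pre ty dog gy".split()
tp = transitional_probabilities(corpus)
print(tp[("pre", "ty")])  # 1.0 — within-word transition is reliable
print(tp[("ty", "bay")])  # 0.5 — word-boundary transition is weaker
```

A learner positing word boundaries at dips in transitional probability recovers “pretty” as a unit from such statistics alone, with no access to meaning.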
Similar statistical learning abilities appear to be used for sequences of
musical tones. For example, infants can discover boundaries between “tone
words” by tracking the probabilities with which particular notes occur
(Saffran & Griepentrog, 2001; Saffran, Johnson, Aslin, & Newport, 1999;
Saffran, 2003b). Even complex aspects of a musical system, such as the
tonal hierarchy of traditional Western music, are reflected in the statistics
of the input (Budge, 1943), implying that they may be available for statis-
tical learning by humans or even by neural networks (Bharucha, 1991).
These results suggest that at least some aspects of music and language may
be acquired via the same learning mechanism. In some ways, this is not
surprising given other facts about music and language. For example, pitch
plays a critical role in many of the world’s languages; these “tone languages”
(such as Mandarin, Thai, and Vietnamese) use pitch contrastively, such
that the same syllable, spoken with a different pitch or pitch contour, has
an entirely different meaning. This use of pitch is upheld by adult users of
tone languages, who are vastly more likely to maintain highly specific pitch
representations for words than are their counterparts who speak nontone
languages such as English (Deutsch, 2002).
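The observation above that the tonal hierarchy is reflected in simple input statistics (cf. Budge, 1943) can be sketched by mere counting. The toy note sequence below is invented for illustration, not drawn from any corpus.

```python
from collections import Counter

# Invented toy sequence in C major, standing in for a corpus of tonal music.
notes = "C G E C D G C F E C G C".split()
profile = Counter(notes)

# Tonic-triad tones dominate the count, echoing the tonal hierarchy that
# probe-tone studies recover from listeners.
print(profile.most_common(3))  # [('C', 5), ('G', 3), ('E', 2)]
```

A statistical learner tracking nothing more than pitch-class frequencies would thus already rank the tonic and dominant above other scale degrees, as Bharucha’s (1991) network simulations suggest.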
Conclusions
volved in the perception of both speech and music. However, the vast
bodies of knowledge pertaining to these separate domains may be stored
in separate places in the brain. Patel argues that when neurological patients present
with what appear to be domain-specific deficits in speech or music, what
has actually been lost is not the processing capacity, but the knowledge
required to engage it as a mature comprehender or producer. On this hy-
pothesis, basic similarities between infant speech and music learning mecha-
nisms would be expected. To test this hypothesis, Patel suggests a more
careful examination of neuropsychological patients who present with dis-
orders apparently specific to one domain.
Another way of looking at this controversy is that a distinction must be
made between the putative modularity of mechanisms used to learn in dif-
ferent domains and the evidence for modularity of functions in the mature
learner. Although the data supporting separate cortical regions subserving
some aspects of musical and linguistic processing in adults are overwhelm-
ing, it is still quite plausible that young children may bring some of the
same skills to bear on learning in each domain. The brains of young chil-
dren are quite plastic and show a remarkable ability to reorganize in the
event of head trauma, which suggests that, whatever the arguments for
functional localization in adults, it is not fixed in children. Furthermore,
differences between the brains of musicians and nonmusicians have already
been demonstrated (e.g., Schlaug, 2003), and it is tempting to conclude
from this that experience has a profound effect on cortical organization.
However, this hypothesis requires further testing, perhaps through a sys-
tematic investigation of less experienced brains. To date, relatively few im-
aging studies have been done with young children, in part because of the
risks associated with PET. Luckily, with the advent of less invasive tech-
niques like fMRI, it has become possible to see whether imaging results
showing modularity in adults can be replicated in children. Efforts in this
direction have been aided by a recent study by Kang and colleagues, which
showed that standard methodological procedures for handling adult fMRI
data, such as standardizing them to a common stereotactic space, are adequate
for working with child imaging data (Kang, Burgund, Lugar, Petersen, &
Schlaggar, 2003). Early results suggest that some language-related func-
tions do show age-related organizational differences that are unrelated to
performance level (Schlaggar et al., 2002). However, more detailed research
must be done using auditory musical and linguistic stimuli in order to bet-
ter understand the modularity issue as it pertains to music and language.
The theoretical issue of modularity aside, it is also the case that meta-
phor plays a powerful role in directing our thinking and suggesting new
insights. Whether or not music and language share common ancestry or
circuitry, thinking about them as related functions may still be quite help-
ful in generating novel hypotheses that can help us to better understand
them as separate domains. It is our hope that our review of the relevant
linguistic and musical issues will help to inspire productive developmental
research toward this end.
References
Aslin, R., Jusczyk, P., & Pisoni, D. B. (1998). Speech and auditory processing during in-
fancy: Constraints on and precursors to language. In D. Kuhn & R. Siegler (Eds.), Hand-
book of Child Psychology (5th ed., Vol. 2, pp. 147–198). New York: Wiley.
Aslin, R., Saffran, J., & Newport, E. (1998). Computation of conditional probability statis-
tics by 8-month-old infants. Psychological Science, 9, 321–324.
Balkwill, L.-L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception
of emotion in music: Psychophysical and cultural cues. Music Perception, 17, 43–64.
Belin, P., Zatorre, R. J., & Ahad, P. (2002). Human temporal-lobe response to vocal sounds.
Cognitive Brain Research, 13, 17–26.
Bergeson, T., & Trehub, S. (2002). Absolute pitch and tempo in mothers’ songs to infants.
Psychological Science, 13, 72–75.
Bharucha, J. (1991). Pitch, harmony, and neural nets: A psychological perspective. In P.
Todd & D. G. Loy (Eds.), Music and connectionism (pp. 84–99). Cambridge, MA: MIT
Press.
Blood, A., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with
activity in brain regions implicated in reward and emotion. Proceedings of the National
Academy of Sciences, 98, 11818–11823.
Budge, H. (1943). A study of chord frequencies: Based on the music of representative com-
posers of the eighteenth and nineteenth centuries. New York: Teachers’ College, Colum-
bia University.
Cheour, M., Ceponiene, R., Lehtokoski, A., Luuk, A., Allik, J., Alho, K., & Näätänen, R.
(1998). Development of language-specific phoneme representations in the infant brain.
Nature Neuroscience, 1, 351–353.
Cheour, M., Ceponiene, R., Leppänen, P., Alho, K., Kujala, T., Renlund, M., Fellman, V., &
Näätänen, R. (2002). The auditory sensory memory trace decays rapidly in newborns.
Scandinavian Journal of Psychology, 43, 33–39.
Cheour, M., Martynova, O., Näätänen, R., Erkkola, R., Sillanpää, M., Kero, P., Raz, A.,
Kaipio, M.-L., Hiltunen, J., Aaltonen, O., Savela, J., & Hämäläinen, H. (2002). Speech
sounds learned by sleeping newborns. Nature, 415, 599–600.
Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.
Chomsky, N. (1959). A review of B. F. Skinner’s Verbal Behavior. Language, 35, 26–58.
Chomsky, N. (1981). Lectures on government and binding. Dordrecht: Foris.
Cooper, R., & Aslin, R. (1990). Preference for infant-directed speech in the first month
after birth. Child Development, 61, 1584–1595.
Crowder, R. (1985). Perception of the major/minor distinction: III. Hedonic, musical, and
affective discriminations. Bulletin of the Psychonomic Society, 23, 314–316.
Crowder, R., Reznick, J. S., & Rosenkrantz, S. (1991). Perception of the major/minor dis-
tinction: V. Preferences among infants. Bulletin of the Psychonomic Society, 29, 187–
188.
Cutting, J., & Rosner, B. (1974). Categories and boundaries in speech and music. Percep-
tion and Psychophysics, 16, 564–570.
DeCasper, A., & Fifer, W. (1980). Of human bonding: Newborns prefer their mothers’
voices. Science, 208, 1174–1176.
Deutsch, D. (2002). The puzzle of absolute pitch. Current Directions in Psychological Sci-
ence, 11, 200–204.
Eimas, P., Siqueland, E., Jusczyk, P., & Vigorito, J. (1971). Speech perception in infants.
Science, 171, 303–306.
Elman, J., Bates, E., Johnson, E., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996).
Rethinking innateness: A connectionist perspective on development (Vol. 10). Cambridge,
MA: MIT Press.
Fernald, A. (1985). Four-month-old infants prefer to listen to motherese. Infant Behavior
and Development, 8, 181–195.
Fernald, A. (1992). Human maternal vocalizations to infants as biologically relevant sig-
nals: An evolutionary perspective. In J. H. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted
mind: Evolutionary psychology and the generation of culture (pp. 391–428). London:
Oxford University Press.
Fishman, Y. (2001). Consonance and dissonance of musical chords: Neural correlates in
auditory cortex of monkeys and humans. Journal of Neurophysiology, 86, 2761–2788.
Fodor, J. (1983). Modularity of mind. Cambridge, MA: MIT Press.
Gomez, R., & Gerken, L. (1999). Artificial grammar learning by 1-year-olds leads to spe-
cific and abstract knowledge. Cognition, 70, 109–135.
Hahne, A., & Friederici, A. D. (1999). Electrophysiological evidence for two steps in syn-
tactic analysis: Early automatic and late controlled processes. Journal of Cognitive Neu-
roscience, 11, 194–205.
Helmholtz, H. L. F. von (1895). On the sensations of tone as a physiological basis for the
theory of music (A. J. Ellis, Trans.) (3rd ed.). London: Longmans, Green, and Co.
Hirsh-Pasek, K., & Golinkoff, R. (1996). The origins of grammar: Evidence from early
language comprehension. Cambridge, MA: MIT Press.
Hirsh-Pasek, K., Kemler Nelson, D., Jusczyk, P., & Cassidy, K. (1987). Clauses are percep-
tual units for young infants. Cognition, 26, 269–286.
Houston, D., & Jusczyk, P. (2000). The role of talker-specific information in word segmen-
tation by infants. Journal of Experimental Psychology: Human Perception and Perfor-
mance, 26, 1570–1582.
Huron, D. (2003). Is music an evolutionary adaptation? In I. Peretz & R. J. Zatorre (Eds.),
The cognitive neuroscience of music (pp. 57–75). Oxford: Oxford University Press.
Ilari, B., & Polka, L. (2002). Memory for music in infancy: The role of style and complexity.
Paper presented at the International Conference on Infant Studies, Toronto.
Jusczyk, P. (1997). The discovery of spoken language. Cambridge, MA: MIT Press.
Jusczyk, P., & Hohne, E. (1997). Infants’ memory for spoken words. Science, 277, 1984–
1986.
Jusczyk, P., Rosner, B., Cutting, J., Foard, C. F., & Smith, L. B. (1977). Categorical percep-
tion of non-speech sounds by two-month-old infants. Perception and Psychophysics, 21,
50–54.
Jusczyk, P., Rosner, B., Reed, M., & Kennedy, L. (1989). Could temporal order differences
underlie 2-month-olds’ discrimination of English voicing contrasts? Journal of the Acous-
tical Society of America, 85, 1741–1749.
Jusczyk, P. W., & Krumhansl, C. L. (1993). Pitch and rhythmic patterns affecting infants’
sensitivity to musical phrase structure. Journal of Experimental Psychology: Human
Perception and Performance, 19, 627–640.
Juslin, P., & Laukka, P. (2003). Communication of emotions in vocal expression and music
performance: Different channels, same code? Psychological Bulletin, 129, 770–814.
Kang, H. C., Burgund, E. D., Lugar, H., Petersen, S., & Schlaggar, B. (2003). Comparison
of functional activation foci in children and adults using a common stereotactic space.
NeuroImage, 19, 16–28.
Karmiloff-Smith, A. (1992). Beyond modularity: A developmental perspective on cognitive
science. Cambridge, MA: MIT Press.
Kastner, M., & Crowder, R. (1990). Perception of the major/minor distinction: IV. Emo-
tional connotations in young children. Music Perception, 8, 189–202.
Kemler Nelson, D., Jusczyk, P., Hirsh-Pasek, K., & Cassidy, K. (1989). How the prosodic
cues in motherese might assist language learning. Journal of Child Language, 16, 55–68.
Kluender, K., Lotto, A., Holt, L., & Bloedel, S. (1998). Role of experience for language-
specific functional mappings of vowel sounds. Journal of the Acoustical Society of
America, 104, 3568–3582.
Koelsch, S., Gunter, T. C., Friederici, A. D., & Schröger, E. (2000). Brain indices of music
processing: “Nonmusicians” are musical. Journal of Cognitive Neuroscience, 12, 520–
541.
Krumhansl, C. L. (1990). Cognitive foundations of musical pitch (Vol. 17). New York:
Oxford University Press.
Krumhansl, C. L., & Jusczyk, P. W. (1990). Infants’ perception of phrase structure in music.
Psychological Science, 1, 70–73.
Kuhl, P. (1979). Speech perception in early infancy: Perceptual constancy for spectrally
dissimilar vowel categories. Journal of the Acoustical Society of America, 66, 1668–
1679.
Kuhl, P. (1983). Perception of auditory equivalence classes for speech in early infancy. In-
fant Behavior and Development, 6, 263–285.
Kuhl, P., Andruski, J., Chistovich, I., & Chistovich, L. (1997). Cross-language analysis of
phonetic units in language addressed to infants. Science, 277, 684–686.
Kuhl, P., & Miller, J. (1975). Speech perception by the chinchilla: Voiced-voiceless distinc-
tion in alveolar plosive consonants. Science, 190, 69–72.
Kuhl, P., & Padden, D. (1982). Enhanced discriminability at the phonetic boundaries for the
voicing feature in macaques. Perception and Psychophysics, 32, 542–550.
Kuhl, P., Williams, K., Lacerda, F., Stevens, K., & Lindblom, B. (1992). Linguistic experi-
ence alters phonetic perception in infants by 6 months of age. Science, 255, 606–608.
Kushnerenko, E. (2003). Maturation of the cortical auditory event-related brain potentials
in infancy. Unpublished doctoral dissertation, University of Helsinki, Helsinki.
Lerdahl, F. (2003). The sounds of poetry viewed as music. In I. Peretz & R. J. Zatorre
(Eds.), The cognitive neuroscience of music. Oxford: Oxford University Press.
Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA:
MIT Press.
Levitin, D. J. (1994). Absolute memory for musical pitch: Evidence from the production of
learned melodies. Perception and Psychophysics, 56, 414–423.
Levitin, D. J. (1996). Memory for musical tempo: Additional evidence that auditory memory
is absolute. Perception and Psychophysics, 58, 927–935.
Liégeois-Chauvel, C., Giraud, K., Badier, J.-M., Marquis, P., & Chauvel, P. (2003). Intrac-
erebral evoked potentials in pitch perception reveal a functional asymmetry of human
auditory cortex. In I. Peretz & R. J. Zatorre (Eds.), The cognitive neuroscience of music
(pp. 3–20). Oxford: Oxford University Press.
MacDougall-Shackleton, S., & Hulse, S. (1996). Concurrent absolute and relative pitch
processing by European starlings (Sturnus vulgaris). Journal of Comparative Psychol-
ogy, 110, 139–146.
Maess, B., Koelsch, S., Gunter, T. C., & Friederici, A. D. (2001). Musical syntax is pro-
cessed in Broca’s area: An MEG study. Nature Neuroscience, 4, 540–545.
Marcus, G., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Rule learning by seven-
month-old infants. Science, 283, 77–80.
Masataka, N. (1999). Preferences for infant-directed singing in 2-day-old hearing infants of
deaf parents. Developmental Psychology, 35, 1001–1005.
Maye, J., Werker, J., & Gerken, L. (2002). Infant sensitivity to distributional information
can affect phonetic discrimination. Cognition, 82, B101–B111.
Mehler, J., Jusczyk, P., Lambertz, G., Halsted, N., Bertoncini, J., & Amiel-Tison, C. (1988).
A precursor of language acquisition in young infants. Cognition, 29, 143–178.
Meyer, L. (1956). Emotion and meaning in music. Chicago: University of Chicago Press.
Smith, L. B., Kemler Nelson, D., Grohskopf, L. A., & Appleton, T. (1994). What child is
this? What interval was that? Familiar tunes and music perception in novice listeners.
Cognition, 52, 23–54.
Speer, J., & Meeks, P. (1988). School children’s perception of pitch in music.
Psychomusicology, 5, 49–56.
Tervaniemi, M. (2001). Musical sound processing in the human brain: Evidence from elec-
tric and magnetic recordings. In R. J. Zatorre & I. Peretz (Eds.), The biological founda-
tions of music (Vol. 930, pp. 259–272). New York, NY: New York Academy of Sciences.
Tervaniemi, M., Kujala, A., Alho, K., Virtanen, J., Ilmoniemi, R. J., & Näätänen, R. (1999).
Functional specialization of the human auditory cortex in processing phonetic and mu-
sical sounds: A magnetoencephalographic (MEG) study. NeuroImage, 9, 330–336.
Tervaniemi, M., Medvedev, S. V., Alho, K., Pakhomov, S. V., Roudas, M. S., van Zuijen, T.
L., & Näätänen, R. (2000). Lateralized automatic auditory processing of phonetic ver-
sus musical information: A PET study. Human Brain Mapping, 10, 74–79.
Tervaniemi, M., & van Zuijen, T. L. (1999). Methodologies of brain research in cognitive
musicology. Journal of New Music Research, 28, 200–208.
Thiessen, E., Hill, E., & Saffran, J. (2004). Infant-directed speech facilitates word segmen-
tation. Manuscript submitted for publication.
Trainor, L. (1996). Infant preferences for infant-directed versus noninfant-directed playsongs
and lullabies. Infant Behavior and Development, 19, 83–92.
Trainor, L., Austin, C., & Desjardins, R. (2000). Is infant-directed speech prosody a result
of the vocal expression of emotion? Psychological Science, 11, 188–195.
Trainor, L., & Heinmiller, B. (1999). The development of evaluative responses to music:
Infants prefer to listen to consonance over dissonance. Infant Behavior and Develop-
ment, 21, 77–88.
Trainor, L., & Trehub, S. (1992). A comparison of infants’ and adults’ sensitivity to West-
ern musical structure. Journal of Experimental Psychology: Human Perception and Per-
formance, 18, 394–402.
Trainor, L., & Trehub, S. (1994). Key membership and implied harmony in Western tonal
music: Developmental perspectives. Perception and Psychophysics, 56, 125–132.
Trainor, L., Wu, L., Tsang, C. D., & Plantinga, J. (2002). Long-term memory for music in
infancy. Paper presented at the International Conference on Infant Studies, Toronto.
Trainor, L., & Zacharias, C. (1998). Infants prefer higher-pitched singing. Infant Behavior
and Development, 21, 799–805.
Trainor, L. J., Tsang, C. D., & Cheung, V. H. W. (2002). Preference for sensory consonance
in 2- and 4-month-old infants. Music Perception, 20, 187–194.
Trehub, S. (2003). Musical predispositions in infancy: An update. In I. Peretz & R. J. Zatorre
(Eds.), The cognitive neuroscience of music (pp. 3–20). Oxford: Oxford University Press.
Trehub, S., Cohen, A., Thorpe, L., & Morrongiello, B. (1986). Development of the percep-
tion of musical relations: Semitone and diatonic structure. Journal of Experimental Psy-
chology: Human Perception and Performance, 12, 295–301.
Trehub, S., & Trainor, L. (1998). Singing to infants: Lullabies and playsongs. Advances in
Infancy Research, 12, 43–77.
Trehub, S., Unyk, A., & Trainor, L. (1993a). Adults identify infant-directed music across
cultures. Infant Behavior and Development, 16, 193–211.
Trehub, S., Unyk, A., & Trainor, L. (1993b). Maternal singing in cross-cultural perspective.
Infant Behavior and Development, 16, 285–295.
Werker, J., & Lalonde, C. (1988). Cross-language speech perception: Initial capabilities and
developmental change. Developmental Psychology, 24, 672–683.
Wilson, S., Wales, R., & Pattison, P. (1997). The representation of tonality and meter in
children aged 7 to 9. Journal of Experimental Child Psychology, 64, 42–66.
Wright, A., Rivera, J., Hulse, S., Shyan, M., & Neiworth, J. (2000). Music perception and
octave generalization in rhesus monkeys. Journal of Experimental Psychology: General,
129, 291–307.
Zatorre, R. J. (2003). Neural specializations for tonal processing. In I. Peretz & R. J. Zatorre
(Eds.), The cognitive neuroscience of music (pp. 231–246). Oxford: Oxford University
Press.
Zatorre, R. J., & Belin, P. (2001). Spectral and temporal processing in human auditory
cortex. Cerebral Cortex, 11, 946–953.
Zatorre, R. J., Belin, P., & Penhune, V. (2002). Structure and function of auditory cortex:
music and speech. Trends in Cognitive Sciences, 6, 37–46.
Zentner, M., & Kagan, J. (1998). Infants’ perception of consonance and dissonance in
music. Infant Behavior and Development, 21, 483–492.