It is well established that emotion recognition of facial expressions declines with age, but evidence for age-related differences in vocal emotions is more limited. This is especially true for nonverbal vocalizations such as laughter, sobs, or sighs. In this study, 43 younger adults (M = 22 years) and 43 older adults (M = 61.4 years) provided multiple emotion ratings of nonverbal emotional vocalizations. In contrast with previous research, which often includes only one positive emotion (happiness) versus several negative ones, we examined four positive and four negative emotions: achievement/triumph, amusement, pleasure, relief, anger, disgust, fear, and sadness. We controlled for hearing loss and assessed general cognitive decline, cognitive control, verbal intelligence, working memory, current affect, emotion regulation, and personality. Older adults were less sensitive than younger ones to the intended vocal emotions, as indicated by decrements in ratings on the intended emotion scales and in accuracy. These effects were similar for positive and negative emotions, and they were independent of age-related differences in cognitive, affective, and personality measures. Regression analyses revealed that younger and older participants' responses could be predicted from the acoustic properties of the temporal, intensity, fundamental frequency, and spectral profile of the vocalizations. The two groups were similarly efficient in using the acoustic cues, but there were differences in the patterns of emotion-specific predictors. This study suggests that ageing produces specific changes in the processing of nonverbal vocalizations. That decrements were not attenuated for positive emotions indicates that they cannot be explained by a positivity effect in older adults.
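The regression approach described in the abstract above, predicting listeners' emotion ratings from acoustic properties of the vocalizations, can be sketched as ordinary least squares. This is a minimal illustration, not the authors' analysis: the feature names (duration, intensity, F0, spectral centre of gravity) and the synthetic data are assumptions made for the example.

```python
import numpy as np

# Hypothetical design: 40 vocalizations x 4 z-scored acoustic predictors
# (duration, mean intensity, mean F0, spectral centre of gravity).
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 4))
true_beta = np.array([0.5, -0.2, 0.8, 0.1])      # simulated cue weights
y = X @ true_beta + 0.1 * rng.standard_normal(40)  # simulated mean ratings

# OLS with an intercept column: solve for beta in y ~ [1, X] @ beta.
Xd = np.column_stack([np.ones(len(X)), X])
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
```

Comparing `beta_hat` fitted separately for younger and older listeners would show whether the two groups weight the acoustic cues differently, which is the logic of the group comparison reported above.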
This article presents a review of the effects of adverse conditions (ACs) on the perceptual, linguistic, cognitive, and neurophysiological mechanisms underlying speech recognition. The review starts with a classification of ACs based on their origin: degradation at the source (production of a noncanonical signal), degradation during signal transmission (an interfering signal or medium-induced impoverishment of the target signal), and receiver limitations (peripheral, linguistic, cognitive).
A left occipital stroke may result in alexia for two reasons, which may coexist depending on the distribution of the lesion. A lesion of the left lateroventral prestriate cortex or its afferents impairs word recognition (“pure” alexia). If the left primary visual cortex or its afferents are destroyed, resulting in a complete right homonymous hemianopia, rightward saccades during text reading are disrupted (“hemianopic” alexia). Using functional imaging, we showed two separate but interdependent systems involved in reading.
Recent behavioural and neuropsychological studies have raised the possibility that the phonological distinction between vowels and consonants is reflected in their independent status in language processing. We used PET scanning to investigate the processing of vowels and consonants in the context of lexical access.
Speech is a complex acoustic stimulus, and there is much redundancy in the normal speech signal (Shannon, 1995). This study investigates plasticity in the normal speech perception system by contrasting the neural systems activated by noise-vocoded speech before and after training. We used a novel learning design with sparse-sampling fMRI. Subjects were presented with a single six-channel vocoded sentence every 19 seconds, at the offset of which they rated how well they had understood the sentence.
Functional neuroimaging, such as Positron Emission Tomography (PET), provides a sensitive measure of neural activity. In this study, PET was used to interrogate the cortical regions associated with increasing amounts of amplitude modulation by manipulating the number of channels in noise-vocoded speech. Regional cerebral activity was measured while pre-trained subjects listened to noise-vocoded speech. The number of channels was varied between 1 and 16 across different scans.
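Noise vocoding, as used in the two studies above, works by splitting speech into frequency bands, extracting each band's amplitude envelope, and using those envelopes to modulate band-limited noise; fewer channels means coarser spectral detail. The sketch below is one common way to implement this, with assumed band edges and filter orders, not the specific stimulus-generation pipeline of either study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=6, lo=70.0, hi=5000.0):
    """Noise-vocode `signal`: band-pass into `n_channels` log-spaced bands,
    take each band's amplitude envelope, modulate band-limited noise, sum."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for ch in range(n_channels):
        sos = butter(4, [edges[ch], edges[ch + 1]],
                     btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))    # amplitude envelope of this band
        carrier = sosfilt(sos, noise)  # noise limited to the same band
        out += env * carrier
    return out
```

Varying `n_channels` from 1 to 16 reproduces the kind of intelligibility manipulation described above: one channel preserves only the broadband amplitude envelope, while sixteen channels restore enough spectral detail for speech to be largely intelligible.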
This study investigated the neural systems involved in speech perception, controlling for the complexity of the speech signal. Using speech stimuli of varying intelligibility but equivalent structural complexity, functional neuroimaging (PET) was used to investigate the brain activation involved in processing intelligible speech in eight right-handed native English-speaking volunteers. A stream of processing in the left temporal lobe was demonstrated, associated with the intelligibility of speech.
Functional neuroimaging affords unique insights into the neural activity of intact human brains. In previous studies, presenting speech to subjects without an explicit task demand (so-called "passive" listening) produced strong activation in bilateral dorsolateral temporal cortex (DLTC) that is both extensive and symmetrical [Wise et al., Brain 114, 1803–1817 (1991)]. Such results support the hypothesis that the DLTC of both hemispheres is involved in the acoustic and phonological processing of speech.
Functional imaging studies, both fMRI and PET, have allowed us to elaborate the neural systems important in speech perception. In particular, these techniques have allowed us to integrate models of human speech perception into the framework of primate auditory cortex, where different functional and anatomical streams of processing can be identified. This neuroanatomical approach also has the potential of relating plasticity in speech processing to the known plasticity of auditory cortex.
Functional imaging studies of speech perception have revealed extensive involvement of the dorsolateral temporal lobes in aspects of speech and voice processing. In the current study, fMRI was used as a functional imaging method to address the perception of speech in different masking conditions. Throughout the scanning experiment, subjects were directed to listen to a female talker with whom they had been familiarized previously.
Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech.
Papers by Sophie Scott