Cochlear implants are highly successful neural prostheses for persons with severe or profound hearing loss who gain little benefit from hearing aid amplification. Although implants are capable of providing important spectral and temporal cues for speech perception, performance on speech tests is variable across listeners. Psychophysical measures obtained from individual implant subjects can also be highly variable across implant channels. This review discusses evidence that such variability reflects deviations in the electrode-neuron interface, which refers to an implant channel's ability to effectively stimulate the auditory nerve. It is proposed that focused electrical stimulation is ideally suited to assess channel-to-channel irregularities in the electrode-neuron interface. In implant listeners, it is demonstrated that channels with relatively high thresholds, as measured with the tripolar configuration, exhibit broader psychophysical tuning curves and smaller dynamic ranges tha...
Bimodal: CI + contralateral hearing aid. Approximately 30% of all CI users worldwide also use a hearing aid, and this share is increasing. Goal: improve the effectiveness and efficiency of bimodal fitting by accounting for its specific requirements. Bimodal formula (optimized frequency response). Rationale:
• (Effective) audibility of low-frequency speech is most important.
• Amplification in dead regions should be avoided.
Effective audibility (Ching et al. 2001): the level range above threshold (the "effective audibility range") that contributes to speech intelligibility decreases with increasing hearing loss. Restoring audibility beyond the effective audibility range will not improve speech intelligibility.
Many studies have documented the benefits to speech understanding when cochlear implant (CI) patients can access low-frequency acoustic information from the ear opposite the implant. In this study we assessed the role of three factors in determining the magnitude of bimodal benefit: (i) the level of CI-only performance, (ii) the magnitude of the hearing loss in the ear with low-frequency acoustic hearing, and (iii) the type of test material. The patients had low-frequency PTAs (average of 125, 250 and 500 Hz) varying over a large range (<30 dB HL to >70 dB HL) in the ear contralateral to the implant. The patients were tested with (i) CNC words presented in quiet (n = 105), (ii) AzBio sentences presented in quiet (n = 102), (iii) AzBio sentences in noise at +10 dB signal-to-noise ratio (SNR) (n = 69), and (iv) AzBio sentences at +5 dB SNR (n = 64). We find maximum bimodal benefit when (i) CI scores are less than 60 percent correct, (ii) hearing loss is less than 60 dB HL in low-...
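As a concrete illustration of the two quantitative criteria above, here is a minimal sketch that computes a low-frequency PTA and applies the 60% / 60 dB HL cutoffs reported in the abstract. The function names and the audiogram data layout are illustrative, not taken from the study.

```python
def low_frequency_pta(thresholds_db_hl):
    """Low-frequency pure-tone average: the mean of the 125, 250, and
    500 Hz thresholds, as defined in the study."""
    return sum(thresholds_db_hl[f] for f in (125, 250, 500)) / 3

def likely_maximum_bimodal_benefit(ci_percent_correct, thresholds_db_hl):
    """Apply the two cutoffs the abstract reports for maximum bimodal
    benefit: CI-only score below 60% correct and a low-frequency PTA
    below 60 dB HL. The cutoffs come from the abstract; everything else
    here is illustrative."""
    return ci_percent_correct < 60 and low_frequency_pta(thresholds_db_hl) < 60

# Example: a 45% CI-only score with mild-to-moderate low-frequency loss.
audiogram = {125: 30, 250: 40, 500: 55}               # dB HL per frequency
print(low_frequency_pta(audiogram))                   # 41.67 dB HL
print(likely_maximum_bimodal_benefit(45, audiogram))  # True
```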
The goal of this study was to determine whether there is a sensitive period during early development when cochlear implantation can occur into a minimally degenerate and/or highly plastic central auditory system. Our measure of central auditory deprivation was latency of the P1 auditory evoked potential, whose generators include auditory thalamocortical areas. Auditory evoked potentials were recorded in 18 congenitally deaf children who were fitted with cochlear implants by 3.5 years of age. The P1 latencies of the children with implants were compared with the P1 latencies of their age-matched peers with normal hearing. There was no significant difference between the P1 latencies of the children with implants and the children with normal hearing. The present results suggest that early implantation occurs in a central auditory system that is minimally degenerate and/or highly plastic. Studies are ongoing to assess the consequences to the developing central auditory system of initia...
Objective-To determine whether patient-derived programming of one's cochlear implant (CI) stimulation levels may affect performance outcomes. Background-Increases in patient population, device complexity, outcome expectations, and clinician responsibility have demonstrated the necessity for improved clinical efficiency. Methods-Eighteen postlingually deafened adult CI recipients (mean=53 years; range, 24-83 years) participated in a repeated-measures, within-participant study designed to compare their baseline listening program to an experimental program they created. Results-No significant group differences in aided sound-field thresholds, monosyllabic word recognition, speech understanding in quiet, speech understanding in noise, or spectral modulation detection (SMD) were observed (p>0.05). Four ears (17%) improved with the experimental program for speech presented at 45 dB SPL and two ears (9%) performed worse. Six ears (27.3%) improved significantly with the self-fit program at +10 dB signal-to-noise ratio (SNR) and four ears (26.6%) improved in speech understanding at +5 dB SNR. No individual scored significantly worse when speech was presented in quiet at 60 dB SPL or in any of the noise conditions tested. All but one participant opted to keep at least one of the self-fitting programs at the completion of this study. Participants viewed the process of creating their program more favorably (t=2.11, p=0.012) and thought creating the program was easier than the traditional fitting methodology (t=2.12, p=0.003). Average time to create the self-fit program was 10 minutes, 10 seconds (mean=9:22; range, 4:46-24:40). Conclusions-Allowing experienced adult CI recipients to set their own stimulation levels without clinical guidance is not detrimental to success.
Speech understanding was assessed in 15 subjects using the Tempo+ speech processor with minimum stimulation levels set to 0 μA, to 10% of the most comfortable loudness level, or equal to behavioural thresholds. No significant differences were found between the behavioural setting and any other setting for consonant identification, vowel identification, or sentences presented either at conversational levels in background noise or at low input levels in quiet.
Both bilateral cochlear implants (CIs) and bimodal (electric plus contralateral acoustic) stimulation can provide better speech intelligibility than a single CI. In both cases patients need to combine information from two ears into a single percept. In this paper we ask whether the physiological and psychological processes associated with aging alter the ability of bilateral and bimodal CI patients to combine information across two ears in the service of speech understanding. The subjects were 61 adult, bilateral CI patients and 94 adult, bimodal patients. The test battery was composed of monosyllabic words presented in quiet and the AzBio sentences presented in quiet, at +10 and at +5 dB signal-to-noise ratio (SNR). The subjects were tested in standard audiometric sound booths. Speech and noise were always presented from a single speaker directly in front of the listener. Age and bilateral or bimodal benefit were not significantly correlated for any test measure. Other factors bein...
The aims of this study were (i) to determine the magnitude of the interaural level differences (ILDs) that remain after cochlear implant (CI) signal processing and (ii) to relate the ILDs to the pattern of errors for sound source localization on the horizontal plane. The listeners were 16 bilateral CI patients fitted with MED-EL CIs and 34 normal-hearing listeners. The stimuli were wideband, high-pass, and low-pass noise signals. ILDs were calculated by passing signals, filtered by head-related transfer functions (HRTFs), to a Matlab simulation of MED-EL signal processing. For the wideband signal and high-pass signals, maximum ILDs of 15 to 17 dB in the input signal were reduced to 3 to 4 dB after CI signal processing. For the low-pass signal, ILDs were reduced to 1 to 2 dB. For wideband and high-pass signals, the largest ILDs for ±15 degree speaker locations were between 0.4 and 0.7 dB; for the ±30 degree speaker locations between 0.9 and 1.3 dB; for the 45 degree speaker locations ...
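The core measurement here is a broadband ILD: the level of the signal at one ear relative to the other, before versus after CI processing. Below is a minimal sketch of that computation, assuming the two inputs are time-aligned one-channel signals (for example, HRTF-filtered noise, or simulated electrode outputs); the function name and the fake 6 dB example are illustrative, not the study's code.

```python
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Broadband interaural level difference in dB: RMS level at the
    left ear relative to the right ear. Positive values mean the
    signal is more intense at the left ear."""
    rms_l = np.sqrt(np.mean(np.square(left)))
    rms_r = np.sqrt(np.mean(np.square(right)))
    return 20.0 * np.log10((rms_l + eps) / (rms_r + eps))

# In the study, 'left' and 'right' would be HRTF-filtered signals
# before processing and simulated processor outputs afterwards; here
# we just impose a 6 dB level difference to exercise the function.
rng = np.random.default_rng(0)
noise = rng.standard_normal(48_000)
print(round(ild_db(noise, noise * 10 ** (-6 / 20)), 1))  # ~6.0 dB
```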
Journal of the American Academy of Audiology, 2012
In this article we review, and discuss the clinical implications of, five projects currently underway in the Cochlear Implant Laboratory at Arizona State University. The projects are (1) norming the AzBio sentence test, (2) comparing the performance of bilateral and bimodal cochlear implant (CI) patients in realistic listening environments, (3) accounting for the benefit provided to bimodal patients by low-frequency acoustic stimulation, (4) assessing localization by bilateral hearing aid patients and the implications of that work for hearing preservation patients, and (5) studying heart rate variability as a possible measure for quantifying the stress of listening via an implant. The long-term goals of the laboratory are to improve the performance of patients fit with cochlear implants and to understand the mechanisms, physiological or electronic, that underlie changes in performance. We began our work with cochlear implant patients in the mid-1980s and received our first grant from...
The Journal of the Acoustical Society of America, 2002
Objective: To assess the effects on speech understanding of frequency misalignments in channel outputs for an acoustic model of a cochlear implant. Design: Consonants, vowels, and sentences were processed through a four-channel sine-wave simulation of a cochlear implant. In the five experimental conditions, channels 1 and 3 were always output at the correct frequency while channels 2 and 4 were output at frequencies varying from the correct frequency to frequencies 25%, 50%, and 75% lower than appropriate. In a fifth condition (2-of-4 condition), channels 2 and 4 were turned off. Results: Consonant recognition was reduced significantly with a 50% shift in channels 2 and 4. Vowel and sentence recognition were reduced significantly with a 75% shift in channels 2 and 4. For all materials, performance in the 75% shift condition was better than in the 2-of-4 condition. Conclusions: The perceiving system that underlies speech recognition is relatively tolerant of misplaced frequency information if other information is presented at the correct frequency location. This flexibility may account for some of the success of cochlear implants. [Research supported by a grant from NIDCD (No. RO1 00654-12) to the second author.]
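To make the "sine-wave simulation with shifted output frequency" concrete: one channel of such a vocoder band-pass filters the input, extracts the band's envelope, and uses it to modulate a sine carrier, so shifting the carrier below the band's center frequency mis-places that channel's information. The sketch below is a minimal single-channel version under those assumptions; the filter order, band edges, and sample rate are illustrative, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16_000  # sample rate (Hz); illustrative

def sine_vocoder_channel(x, band, out_freq_hz, fs=FS):
    """One channel of a sine-wave vocoder: band-pass the input, take
    the Hilbert envelope, and modulate a sine carrier with it. Setting
    out_freq_hz below the band's center frequency mimics the study's
    downward-shifted channel outputs."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfilt(sos, x)))
    t = np.arange(len(x)) / fs
    return envelope * np.sin(2 * np.pi * out_freq_hz * t)

# Channel 2 of a four-channel simulation, output 50% below its center
# frequency (the band edges here are made up for illustration).
x = np.random.default_rng(1).standard_normal(FS)  # 1 s of noise as input
center = np.sqrt(600 * 1_500)                     # geometric center of the band
shifted = sine_vocoder_channel(x, (600, 1_500), out_freq_hz=0.5 * center)
```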
The aim of this study was to determine the minimum amount of low-frequency acoustic information that is required to achieve speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear. Design: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass-filtered at 125, 250, 500, or 750 Hz, or unfiltered (wideband). Results: Adding low-frequency acoustic information to electrically stimulated information led to a significant improvement in word recognition in quiet and sentence recognition in noise. Improvement was observed in the electric and acoustic stimulation condition even when the acoustic information was limited to the 125-Hz-low-passed signal. Further improvement for the sentences in noise was observed when the acoustic signal was increased to wideband. Conclusions: Information from the voice fundamental frequency (F0) region accounts for the majority of the speech perception benefit when acoustic stimulation is added to electric stimulation. We propose that, in quiet, low-frequency acoustic information leads to an improved representation of voicing, which in turn leads to a reduction in word candidates in the lexicon. In noise, the robust representation of voicing allows access to low-frequency acoustic landmarks that mark syllable structure and word boundaries. These landmarks can bootstrap word and sentence recognition.
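Generating the acoustic-stimulation conditions amounts to low-pass filtering wideband speech at each cutoff. A plausible sketch follows; the abstract does not specify the filter type or order, so the zero-phase Butterworth design, order, and sample rate here are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(x, cutoff_hz, fs, order=6):
    """Zero-phase low-pass filter, a stand-in for the study's
    125/250/500/750 Hz conditions (actual filter design not given)."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Build each acoustic-stimulation condition from a wideband signal.
fs = 44_100
speech = np.random.default_rng(2).standard_normal(fs)  # placeholder signal
conditions = {f"{fc} Hz": lowpass(speech, fc, fs) for fc in (125, 250, 500, 750)}
conditions["wideband"] = speech
```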
Objectives-The authors describe the localization and speech-understanding abilities of a patient fit with bilateral cochlear implants (CIs) for whom acoustic low-frequency hearing was preserved in both cochleae. Design-Three signals were used in the localization experiments: low-pass, high-pass, and wideband noise. Speech understanding was assessed with the AzBio sentences presented in noise. Results-Localization accuracy was best in the aided, bilateral acoustic hearing condition, and was poorer in both the bilateral CI condition and when the bilateral CIs were used in addition to bilateral low-frequency hearing. Speech understanding was best when low-frequency acoustic hearing was combined with at least one CI. Conclusions-The authors found that (1) for sound source localization in patients with bilateral CIs and bilateral hearing preservation, interaural level difference cues may dominate interaural time difference cues and (2) hearing-preservation surgery can be of benefit to patients fit with bilateral CIs. Localization on the horizontal plane was assessed using low-pass (LP), high-pass (HP), and wideband noise signals. The rationale for using these signals comes from the Duplex theory of sound source localization. A thorough review of this theory and the supporting data are