September 21, 2010 | Audiology

Biological Markers of Reading and Speech-in-Noise Perception in the Auditory System


Language-based learning impairments occur in 5%–10% of school-aged children (Shapiro, Church, & Lewis, 2007). Children with language-based learning impairments are more greatly affected by background noise than their typically developing peers (Bradlow et al., 2003; Ziegler et al., 2009) and may be at a particular disadvantage in noisy classrooms, which in one survey were as loud as a busy traffic intersection (Shield & Dockrell, 2003).

Researchers at the Auditory Neuroscience Lab at Northwestern University have found that children with language-based learning impairments can have atypical neural representations of speech in their auditory systems. Moreover, background noise compromises auditory encoding of sound to a greater extent in these children compared to their typically developing peers. These neural markers, described below, are associated with language and listening skills such as speech-in-noise perception and reading ability and reflect a biological basis for these abilities.

Because the nervous system communicates through electrical activity, the coordinated responses of auditory brainstem neurons to sound can be recorded at the scalp; this recording is called the auditory brainstem response (ABR). For decades the ABR has been used as a clinical measure of auditory function, and more recently it has been used as an objective measure of auditory processing of complex sounds such as speech and music (for a review, see Skoe & Kraus, 2010). Brainstem activity is also experience-dependent: it is shaped by lifelong experience with music or language as well as by short-term training (Krishnan et al., 2005; Russo et al., 2005; Parbery-Clark et al., 2009). The brainstem response reflects the acoustics of the sound presented to the ear, capturing its pitch, timing, and harmonics with remarkable fidelity.
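
For readers who want a concrete picture of how such a response is obtained, the sketch below illustrates the basic idea in Python. It is a simplified illustration, not the recording pipeline used in the studies described here: the array names, sampling rate, sweep count, and placeholder data are all assumptions. Scalp-recorded epochs time-locked to thousands of presentations of a syllable are averaged, which cancels activity that is not locked to the stimulus, and the resulting waveform's spectrum can then be examined for pitch and harmonic information.

```python
import numpy as np

fs = 20000                                   # sampling rate in Hz (assumed)
n_trials = 6000                              # number of stimulus sweeps (assumed)
epoch_len = int(0.05 * fs)                   # 50-ms response window per sweep

# Placeholder data standing in for bandpass-filtered, time-locked EEG epochs;
# in practice each row would be the activity following one syllable onset.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(n_trials, epoch_len))

# Averaging across sweeps cancels activity that is not locked to the stimulus,
# leaving an estimate of the brainstem response.
abr = epochs.mean(axis=0)

# The response spectrum can then be examined for pitch (F0) and harmonics.
spectrum = np.abs(np.fft.rfft(abr))
freqs = np.fft.rfftfreq(epoch_len, d=1 / fs)
```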

Children with language-based learning impairments have poorer brainstem encoding of the timing and harmonics of sound, features that are important for perceiving certain consonants, whereas their encoding of the pitch of the sound is unaffected (Banai et al., 2009). Stop consonants are particularly difficult to understand in background noise, and for children with language-based learning and listening disorders they are difficult even in quiet (Miller & Nicely, 1955; Tallal & Piercy, 1974, 1975).

Study Results

In our lab, two recent studies have linked the biological encoding of stop consonants with speech-in-noise listening and reading skills. First, poor readers were found to have poorer brainstem representation of the neural timing necessary to distinguish stop consonants than good readers (Hornickel et al., 2009). Children heard the synthesized syllables [ba], [da], and [ga], which differed only in their second formant. These sounds were randomly intermixed with five other speech sounds that differed on a number of acoustic parameters. ABR measures reflect the formant differences among [ba], [da], and [ga] through the timing of the responses (Johnson et al., 2008). The [ga] syllable has the highest starting second-formant frequency and elicits the fastest response; the [ba] syllable has the lowest starting second-formant frequency and elicits the slowest response. The [da] syllable is between the two in frequency and in response timing.

When comparing the three responses, we therefore expect the response to [ga] to lead the response to [da], which in turn leads the response to [ba]. A comparison of good and poor readers on a score reflecting the presence and magnitude of this expected timing pattern showed that poor readers had significantly worse separation of the three responses than good readers; the brains of poor readers were less able to represent the three sounds as distinct. Importantly, the ability of the brainstem to represent the three sounds distinctly was correlated with the perception of speech in noise (see Figure 1). This study suggests that the brains of children who are poor readers have difficulty representing contrastive speech sounds as different, a difficulty that may affect both reading and speech-in-noise perception.
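
The exact scoring procedure is described in Hornickel et al. (2009); the sketch below is only a simplified illustration of the logic. It uses hypothetical peak latencies and behavioral scores to show how one might quantify whether response timing follows the expected [ga]-before-[da]-before-[ba] pattern and relate that measure to speech-in-noise performance.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical peak latencies (ms) for each child; columns = [ga], [da], [ba].
latencies = np.array([
    [41.2, 41.6, 42.1],   # well-separated responses
    [41.5, 41.6, 41.7],   # poorly separated responses
    [41.3, 41.8, 42.4],
    [41.6, 41.6, 41.8],
])

# A simple differentiation score: larger when [ga] leads [da] and [da] leads [ba],
# near zero when the three responses are not separated in time.
score = (latencies[:, 1] - latencies[:, 0]) + (latencies[:, 2] - latencies[:, 1])

# Hypothetical speech-in-noise scores for the same children.
speech_in_noise = np.array([12.0, 6.5, 13.5, 7.0])

r, p = pearsonr(score, speech_in_noise)
print(f"differentiation score vs. speech-in-noise: r = {r:.2f}, p = {p:.3f}")
```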

In the second study, we found that poor readers were unable to benefit from predictability in the presentation of speech sounds (Chandrasekaran et al., 2009). The ability to benefit from the statistical patterns and predictability of environmental sounds is thought to contribute to early language learning in infants when they begin parsing strings of speech into meaningful units (Saffran, Aslin, & Newport, 1996). Children with specific language impairment are poor at learning a pseudo-language through statistical patterns in the environment (Evans, Saffran, & Robe-Torres, 2009).

We found that the brains of poor readers are unable to benefit from the predictability of speech sounds and the context in which they occur. In the lab, children listened to a [da] syllable presented in two ways. In the predictable condition, [da] was presented alone, so there was a 100% probability that the next sound would also be a [da]. In the variable condition, [da] was intermixed unpredictably with seven other speech sounds, each occurring 12.5% of the time. The brainstem responses to [da] in the two conditions were compared; in typically developing children, the elements of the response thought to correspond to vocal pitch were enhanced when [da] was presented in the predictable condition.

Unlike the good readers, poor readers did not show enhanced responses when the [da] was presented in the predictable condition, and the degree of enhancement in the predictable condition was correlated with perception of speech in noise. The ability to "lock on to" or "tag" repeating sound characteristics, such as vocal pitch, appears to be important for speech-in-noise perception because it helps listeners track one speaker over time. Poor readers were impaired in their ability to "tag" the elements reflecting vocal pitch, which likely contributes to their difficulty understanding speech in noise.
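
Again as a simplified illustration rather than the published analysis, the sketch below shows one way such an enhancement could be quantified: measure the spectral energy near the fundamental frequency (vocal pitch) of the response to [da] in each condition and take the difference. The 100-Hz fundamental, 170-ms analysis window, and placeholder waveforms are assumptions for the example only.

```python
import numpy as np

fs = 20000                                  # sampling rate in Hz (assumed)
f0 = 100                                    # vocal pitch of the [da] in Hz (assumed)
t = np.arange(int(0.17 * fs)) / fs          # 170-ms response window (assumed)

def f0_magnitude(response, fs, f0, half_bw=5):
    """Spectral magnitude averaged over a narrow band around F0."""
    spectrum = np.abs(np.fft.rfft(response))
    freqs = np.fft.rfftfreq(response.size, d=1 / fs)
    band = (freqs >= f0 - half_bw) & (freqs <= f0 + half_bw)
    return spectrum[band].mean()

# Placeholder averaged responses for one child: a stronger F0 component in the
# predictable condition than in the variable condition, plus background noise.
rng = np.random.default_rng(1)
predictable = 1.2 * np.sin(2 * np.pi * f0 * t) + rng.normal(size=t.size)
variable = 0.8 * np.sin(2 * np.pi * f0 * t) + rng.normal(size=t.size)

enhancement = f0_magnitude(predictable, fs, f0) - f0_magnitude(variable, fs, f0)
print(f"F0 enhancement (predictable - variable): {enhancement:.2f}")
```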

Both of these studies revealed that poor readers had impaired brainstem encoding of speech relative to good readers, and that these impairments were correlated with the perception of speech in noise. Children with language-based learning impairments are likely to have greater difficulty in noisy classrooms than typically developing children, and we have shown that this difficulty may be due to auditory nervous system dysfunction that affects both reading and speech-in-noise perception. Advances in our understanding of the biological bases of reading and hearing speech in noise can be applied clinically as speech-evoked brainstem responses can be used in assessing and monitoring the treatment of individuals with impairment of these abilities. 

Nina Kraus, PhD, is a Hugh Knowles Professor at Northwestern University. Her auditory neuroscience research laboratory investigates the biological basis of speech and music perception and experience-dependent brain plasticity. Contact her at nkraus@northwestern.edu. For more details about her research, visit www.brainvolts.northwestern.edu.

Jane Hornickel, BA, is a PhD candidate in communication sciences at Northwestern University and a member of the Auditory Neuroscience Lab. Her research focuses on the relationships between auditory function and reading and on the impact of a classroom assistive listening device on both neural and behavioral function. Contact her at j-hornickel@northwestern.edu.

Cite as: Kraus, N., & Hornickel, J. (2010, September 21). Biological Markers of Reading and Speech-in-Noise Perception in the Auditory System. The ASHA Leader.

References

Banai, K., Hornickel, J., Skoe, E., Nicol, T., Zecker, S. G., & Kraus, N. (2009). Reading and subcortical auditory function. Cerebral Cortex, 19, 2699–2707.

Bradlow, A. R., Kraus, N., & Hayes, E. (2003). Speaking clearly for children with learning disabilities: Sentence perception in noise. Journal of Speech, Language, and Hearing Research, 46, 80–97.

Chandrasekaran, B., Hornickel, J., Skoe, E., Nicol, T., & Kraus, N. (2009). Context-dependent encoding in the human auditory brainstem relates to hearing speech in noise: Implications for developmental dyslexia. Neuron, 64, 311–319.

Evans, J. L., Saffran, J. R., & Robe-Torres, K. (2009). Statistical learning in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 52, 312–335.

Hornickel, J., Skoe, E., Nicol, T., Zecker, S., & Kraus, N. (2009). Subcortical differentiation of stop consonants relates to reading and speech-in-noise perception. Proceedings of the National Academy of Sciences, 106, 13022–13027.

Johnson, K., Nicol, T., Zecker, S. G., Bradlow, A. R., Skoe, E., & Kraus, N. (2008). Brainstem encoding of voiced consonant-vowel stop syllables. Clinical Neurophysiology, 119, 2623–2635.

Krishnan, A., Xu, Y., Gandour, J., & Cariani, P. (2005). Encoding of pitch in the human brainstem is sensitive to language experience. Cognitive Brain Research, 25, 161–168.

Miller, G. A., & Nicely, P. E. (1955). An analysis of perceptual confusions among some English consonants. Journal of the Acoustical Society of America, 27, 338–352.

Parbery-Clark, A., Skoe, E., & Kraus, N. (2009). Musical experience limits the degradative effects of background noise on the neural processing of sound. Journal of Neuroscience, 29, 14100–14107.

Russo, N., Nicol, T., Zecker, S. G., Hayes, E., & Kraus, N. (2005). Auditory training improves neural timing in the human brainstem. Behavioural Brain Research, 156, 95–103.

Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-olds. Science, 274, 1926–1928.

Shapiro, B., Church, R. P., & Lewis, M. E. B. (2007). Specific learning disabilities. In M. L. Batshaw, L. Pellegrino, & N. J. Roizen (Eds.), Children with disabilities (6th ed., pp. 367–385). Baltimore: Paul Brookes.

Shield, B. M., & Dockrell, J. E. (2003). The effects of noise on children at school: A review. Journal of Building Acoustics, 10, 97–106.

Skoe, E., & Kraus, N. (2010). Auditory brainstem response to complex sounds: A tutorial. Ear and Hearing, 31, 302–324.

Tallal, P., & Piercy, M. (1974). Developmental aphasia: Rate of auditory processing and selective impairment of consonant perception. Neuropsychologia, 12, 83–93.

Tallal, P., & Piercy, M. (1975). Developmental aphasia: The perception of brief vowels and extended stop consonants. Neuropsychologia, 13, 69–74.

Ziegler, J. C., Pech-Georgel, C., George, F., & Lorenzi, C. (2009). Speech-perception-in-noise deficits in dyslexia. Developmental Science, 12, 732–745.



  
