Auditory development is a prolonged process, despite the precocious development of the inner ear. Audiologists know that infants don't respond to sound at the low intensities to which adults respond. What hearing scientists have learned in 25 years of studying the development of hearing in infants and children is that youngsters' immature thresholds in the sound booth reflect immature hearing, not just immature responses. These immaturities limit infants' ability not only to detect a tone, but also to hear and to learn from sound in real environments. Moreover, the process of auditory development continues well into the school years, as children become more selective and more flexible in the way that they process sound.
Clinicians implicitly understand that infants and children hear differently from adults, and this understanding shapes their interactions with infants and children. Research in auditory development has broader implications for clinical and educational practice—as well as public policy—as professionals work to reduce noise levels in homes and in schools and raise awareness of the effect of competing sound on infants' and children's ability to process speech.
Auditory development progresses through three stages. During the first stage, the ability of the auditory system to encode sound precisely becomes mature. This stage lasts from full-term birth to about 6 months of age, and involves maturation of the middle ear and of the brainstem auditory pathways. During the second stage, from 6 months to about 5 years of age, the ability to focus on or select one feature of sound matures. During the third stage, from 6 years into adolescence, the ability to use different sound features flexibly under changing listening conditions matures. Both the second and third stages involve maturation of auditory cortex and central processing.
Stage 1: Maturation of Sound Coding
Newborns' impressive ability to discriminate between speech sounds, to recognize voices, and even to recognize their native language has been well-documented. Clearly, infants come into postnatal life ready to listen to sound and to learn from it. This process likely begins before birth. However, studies that have tested newborns' discrimination of changes in the details of speech suggest that their representations of sound are coarser than adults' in some ways. For example, they are more likely to notice a change in a syllable if the vowel changes than if a consonant does (Bertoncini, Bijeljac-Babic, Jusczyk, Kennedy, & Mehler, 1988).
From examining very basic auditory abilities, researchers know that young infants' thresholds for detecting sound are higher than adults' and that their ability to separate or discriminate sounds of different frequencies is immature, more so at frequencies above 3000 Hz than at lower frequencies (e.g., Olsho, Koch, Carter, Halpin, & Spetner, 1988; Olsho, Koch, & Halpin, 1987). Studies of the acoustical response of the ear of young infants point to the middle ear as a source of immature thresholds in quiet. The middle ear of an infant is less efficient than that of an adult in transmitting sound to the inner ear (Keefe, Bulen, Arehart, & Burns, 1993). The efficiency of high-frequency sound transmission through the middle ear improves considerably in the first year of life, with smaller progressive improvements across the frequency range of hearing continuing well into childhood (Okabe, Tanaka, Hamada, Miura, & Funai, 1988).
Interestingly, the inner ear seems to be mature in newborns. Nonetheless, electrophysiological measures show a broader neural response to high-frequency sounds, matching the results of behavioral studies of infants (e.g., Abdala & Folsom, 1995). Furthermore, transmission time of the neural response through the brainstem auditory pathway is correlated with young infants' ability to detect a high-frequency sound (Werner, Folsom, & Mancl, 1994).
Limitations in basic auditory abilities would be expected to limit the precision with which a young infant can represent a complex sound, such as speech. Researchers speculate that one reason adults speak more slowly, more clearly, and at a higher intensity to infants is to compensate for infants' immature hearing.
Stage 2: Maturation of Selective Listening and Discovering New Details in Sound
By the time an infant is 6 months old, middle ear efficiency has improved and the transmission of information through the brainstem seems mature. However, behavioral tests of hearing still find higher response thresholds, in quiet and in noise, for infants at this age—and in fact, for children up to 4 years old (e.g., Schneider & Trehub, 1992). A small part of this immature sound detection may be due to simple inattentiveness, or infants' not being on task at all times during the test. However, most of the difference seems to result from the way infants listen to sound.
While an adult will focus on the frequencies in a sound that are expected to allow them to identify the sound, infants tend to listen in a broadband way. They listen to all frequencies rather than selecting the most informative. This difference is demonstrated in a simple task in which infants and adults learn to respond to a tone in noise (Bargones & Werner, 1994). On a large majority of the trials, the tone is presented at one "expected" frequency, but on some trials, a tone at a different, "unexpected" frequency is presented. Adults tend not to hear the unexpected frequencies, while infants detect the expected and unexpected frequencies equally well. The interpretation of this result is that infants listened for a broad range of frequencies, while adults listened only to the frequency at which they expected the signal to be presented.
Could it be that infants just don't form expectations about sound as adults do? Infants' performance in other tasks suggests that they not only form expectations but also direct their attention to increase their sensitivity to sound under some conditions. For example, when a short burst of noise was presented to cue the listener that the target sound was about to occur, both infants and adults detected the target sound better when it occurred at the expected time rather than at a slightly earlier- or later-than-expected time (Parrish & Werner, 2004). This finding suggests that the infants learned that the sound they were supposed to detect usually occurred at a specific time and that they listened for the sound at that time but not at other times. This conclusion means that infants have the capacity to listen selectively under certain conditions.
If infants can direct their attention to a particular time, why don't they direct their attention to a particular frequency? Researchers speculate that it is maladaptive for infants to listen selectively to a sound like speech, in which the important frequencies change depending on the speaker, the context, the language, and other factors. It may be more sensible for infants to continue to listen broadly to speech until considerable listening experience in many situations allows them to learn where the important speech cues occur. In fact, research suggests that adults learning a second language have difficulty in part because they listen to the aspects of speech they have learned to listen to in their native language, while ignoring cues in other frequency ranges that are important for the second language (e.g., Best, McRoberts, & Sithole, 1988).
One result of infants' broadband listening is that it makes it difficult for them to separate target sounds from competing sounds. Adults have trouble separating a target from competing sounds chiefly when the competing sounds change over time, and varying competing sounds pose special difficulty for infants as well. For infants, though, merely having a competing sound in the background seems to make a target hard to hear. For example, the presence of a competing sound, even one that is far from the target sound in frequency, increases infants' threshold for the target sound (L. J. Leibold & Werner, 2006). This susceptibility to interference from competing sounds appears to continue until children are 4 or 5 years old (L. Leibold & Neff, in press). This finding implies that learning about sound will be more difficult for infants and preschool children in noisy environments and those in which there are several competing sources of sound. Research underway in several laboratories is attempting to determine whether infants and children are able to use some of the strategies that adults use to separate target and competing sounds.
The development of selective listening involves not only picking out one sound among several, but also listening to the details in complex sounds such as speech. In a series of studies, Nittrouer (2006) has shown that young children tend to make decisions about the identity of a syllable or a word on the basis of global acoustic differences rather than on fine acoustic details. Nittrouer's findings are consistent with the idea that children do not focus on specific frequencies. Apparently, it is only with years of exposure under a variety of conditions that children notice the details in speech.
Stage 3: Maturation of Perceptual Flexibility
By school age, children appear to have mastered selective listening: they are no longer as influenced by background sounds as younger children are, and they appear to focus on informative aspects of sound. However, school-aged children are still less consistent than adults in the way they categorize speech sounds, and researchers can still identify listening conditions that are more difficult for school-aged children than for adults.
Children are less consistent than adults in identifying speech sounds because once they have discovered the multiple redundant acoustic differences between sounds, they have trouble when all of those differences are not available to them. For example, Hazan and Barrett (2000) found that when they synthetically altered syllables so that the syllables were distinguished by only one acoustic cue, 6-year-old children were much less consistent in identifying them than when multiple acoustic cues were available. Older children and adults were as consistent at categorizing the syllables with one cue as with multiple cues. Similarly, in the presence of noise or reverberation, some speech cues may be difficult to hear because of masking or distortion, while others remain usable. Under such conditions, adults can switch to the more reliable cue, while children apparently cannot.
Finally, speech perception may be a relatively automatic process for young adults, based on years of practice. For school-aged children, however, perceiving speech in difficult listening situations may be less automatic, requiring greater attention and allocation of more processing resources. Any additional demands on attention may be impossible for children to manage. Wightman and Kistler (2005) showed recently that adults could separate two voices presented to one ear; their ability to do so was little affected by yet another voice presented to the opposite ear. Children ages 6-9, in contrast, could separate the two voices in one ear fairly well, but their performance deteriorated markedly when another voice was added to the opposite ear. One explanation of this result is that the adults could separate the original two voices because they had sufficient processing resources to block out the voice in the opposite ear, while the children required so much effort to separate the original two voices that they had no processing resources available to block out the third voice.
Implications of Auditory Development
Developmental studies of infants and young children are beginning to explain why difficult listening conditions are nearly always more challenging for children than for adults. Early in infancy, fundamental auditory processes limit infants' ability to represent the fine acoustic details in the sounds they hear. However, even after the auditory system is able to represent those details, infants and preschool children do not appear to use all the details available to them. It is as if the system remains unselective during this stage of development, so that children will learn to use the appropriate acoustic information even though the frequencies at which it will occur are uncertain. Finally, school-aged children seem to have the acoustic details available to them, and they are able to attend to those details. Auditory development in this final stage involves learning to use different details flexibly with changes in listening conditions and acquiring the practice needed to make speech perception an automatic process.
The results of these studies have implications in many realms. For the audiologist, they suggest that infants and children with hearing impairment need to hear the broadest possible range of frequencies to learn how to understand speech most effectively. For the speech-language pathologist, they suggest that children may not always hear all of the acoustic details in speech, even when those details are available to them. For those charged with designing the environments in which infants and children live and learn, they underscore the importance of reducing the levels of noise and reverberation to optimize auditory learning.