This article is part of a series contributed by Special Interest Division 6, Hearing and Hearing Disorders: Research and Diagnostics. With a focus on basic and applied research, Division 6 provides a forum in which clinicians may suggest areas in need of further research and develop productive collaborations.
It is true—not all speech tests are created equal, and many have their own niche. Tests measure speech-in-quiet, speech-in-noise, and many other domains of auditory function and perception from the end organ to the cortex. Audiologists who evaluate children or adults for auditory processing disorders are familiar with the plethora of tests that have been designed to measure the different domains of auditory function. However, selecting a speech test matched either to the specific auditory domain of interest or to the domain about which the patient voices a complaint has not become common practice in most audiology clinics.
During a presentation 10 years ago, James Jerger, an auditory processing researcher and a distinguished scholar-in-residence at the University of Texas at Dallas, stated, "In the last 50 years we have been to the moon and back, but speech testing has remained the same." For the most part, this comment still captures the essence of progress in clinical speech audiometry—stagnation. Unfortunately, most audiologists still rely solely on the age-old speech-in-quiet technique, with the materials presented at a level above the speech-recognition threshold or the pure-tone average. As Roeser and Clark (2008) recently pointed out, this speech-in-quiet technique is further compromised by the presentation of materials via monitored live voice (MLV) rather than via standardized, recorded speech materials. Presenting monosyllabic words via MLV exacerbates the performance differences that naturally occur with the most commonly used materials (i.e., NU No. 6, CID W-22s, PB-50s) because of speaker differences (Kreul et al., 1969).
Speech in Noise
Issues other than speaker characteristics are also important in the selection of a speech test. For example, most adults with hearing loss commonly complain of difficulty understanding speech in noise. Thus, in the evaluation of the auditory perceptual abilities of these individuals, the use of a speech-in-noise task is critical and certainly has face validity. Unfortunately, audiologists have ignored (Strom, 2006) the suggestion of our forefathers to incorporate a speech-in-noise instrument into the basic test protocol (Carhart & Tillman, 1970). Audiologists need to understand that speech-in-quiet and speech-in-noise are different domains of auditory function that need to be assessed with different instruments. Killion (2002) was right when he indicated that if you want to know how well an individual understands speech in background noise, you must measure that function, because recognition performance in noise cannot be predicted from either pure-tone data or speech-in-quiet data.
Once an audiologist decides to utilize a speech-in-noise test in addition to speech-in-quiet testing, a few other decisions need to be made. Speech-in-noise tests vary in terms of the type of stimulus used—such as digits, words, sentences, and running speech—and in terms of the type of noise—such as speech-spectrum noise or multitalker babble. Studies have shown that different speech-in-noise tests may be more appropriate for different populations.
For example, a recent study from our labs (Wilson et al., 2007a) showed that the Words-in-Noise Test (WIN, Wilson, 2003; Wilson & McArdle, 2007) and the Quick Speech-in-Noise Test (QuickSIN, Killion et al., 2004) were more sensitive at separating listeners with normal hearing from listeners with hearing loss than the Hearing in Noise Test (HINT, Nilsson et al., 1994) or the BKB-Speech in Noise Test (BKB-SIN, Etymotic Research, 2005).
The conclusion was that the stimuli for the HINT and the BKB-SIN were so rich in semantic context that individuals with hearing loss may have been using top-down processing that improved their performance. Therefore, test instruments that are sensitive to bottom-up processing, such as the WIN and QuickSIN, are more informative for understanding basic auditory function. In contrast, instruments such as the HINT and BKB-SIN, which do not tax bottom-up processing, are more informative in situations in which top-down processing is important, as with the evaluation of potential cochlear-implant patients.
Because most speech-in-noise tests involve sentence materials, the WIN was developed with the Northwestern University Auditory Test No. 6 monosyllabic words recorded by a female speaker (Department of Veterans Affairs, 2006). The WIN uses a descending, multiple signal-to-noise ratio (SNR) paradigm to estimate the 50% correct point, which quantifies hearing loss in terms of SNR (i.e., the SNR at which 50% correct recognition is achieved).
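The 50% correct point from a descending multiple-SNR paradigm of this kind is commonly estimated with the Spearman-Kärber equation. The sketch below is a hedged illustration of that calculation; the particular SNR range, step size, and words-per-level values are assumptions chosen for the example, not a specification of the WIN itself.

```python
def snr50(highest_snr_db, step_db, words_per_level, total_correct):
    """Spearman-Karber estimate of the 50% correct point for a
    descending multiple-SNR word-recognition task.

    highest_snr_db  -- SNR (dB) of the first, easiest level
    step_db         -- dB decrement between successive levels
    words_per_level -- number of words presented at each SNR
    total_correct   -- total words repeated correctly across all levels
    """
    return highest_snr_db + step_db / 2 - step_db * total_correct / words_per_level

# Illustrative parameters only: levels descending from 24 dB SNR in
# 4-dB steps, 10 words per level; a listener repeats 35 words correctly.
print(snr50(24, 4, 10, 35))  # -> 12.0 dB SNR
```

A lower SNR at the 50% point indicates better performance; expressing the result in decibels of SNR is what allows "hearing loss for speech in noise" to be quantified on a dB scale.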
As with the speech-in-noise materials, not all masking noises are created equal. For example, the masking effectiveness of speech-spectrum noise and multitalker babble equated in rms was evaluated with the WIN (Wilson et al., 2007b). For listeners with normal hearing, speech-spectrum noise was the more effective masker by ~3 dB, whereas for listeners with hearing loss there was no difference in the amount of masking produced by the two maskers. This finding supports the notion that listeners with normal hearing are able to extract perceptual speech cues from an amplitude-modulated masker during the brief intervals in which the SNR is more favorable, i.e., during the "valleys" of the modulation cycle, whereas listeners with hearing loss are unable to use the perceptual cues available during those valleys.
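Equating two maskers in rms amplitude, as described above, is a simple scaling operation. The following minimal sketch uses made-up sample values rather than real noise recordings, purely to show the arithmetic:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sequence of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def equate_rms(samples, target_rms):
    """Scale samples so that their rms matches target_rms."""
    gain = target_rms / rms(samples)
    return [gain * s for s in samples]

# Toy example: scale a "babble" segment to the rms of a
# "speech-spectrum noise" segment so the two maskers are comparable.
speech_spectrum = [0.5, -0.5, 0.5, -0.5]   # rms = 0.5
babble = [0.2, -0.1, 0.3, -0.2]
babble_equated = equate_rms(babble, rms(speech_spectrum))
print(round(rms(babble_equated), 6))  # -> 0.5
```

Equal rms means equal long-term power, but, as the WIN finding shows, it does not mean equal masking effectiveness, because a modulated masker still offers momentary favorable-SNR valleys.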
Domains of Auditory Function
To audiologists, the words "hearing loss" usually evoke thoughts of pure-tone sensitivity, which rightfully should be the guiding—but not the only—domain of auditory function evaluated. In the course of this brief commentary, we have mentioned the importance of assessing "hearing loss" in various domains of auditory function, from end-organ sensitivity through cognitive function. After sensitivity, the ability to understand speech in quiet, expressed as percent-correct recognition, is the second-most common domain of auditory function assessed in the course of an audiological evaluation. We and others are suggesting that the ability to understand speech in background noise is a third domain of auditory function that should be assessed routinely and expressed as the decibel SNR hearing loss.
In the future, other domains of auditory perceptual function may be assessed routinely with other speech paradigms, which may include compressed speech, filtered speech, speech in interrupted maskers, dichotic speech, speech in the masking-level difference paradigm, and speech tasks involving auditory working memory and auditory attention. Including additional test instruments will increase the time spent with a patient, but we must consider that the auditory system and auditory function are complex and difficult to evaluate thoroughly with "pure tones and speech (in quiet)."
Acknowledgment: The U.S. Department of Veterans Affairs supported this research through awards to both authors.