April 6, 2010 Feature

Reducing Noise Interference

Strategies to Enhance Hearing Aid Performance

[Photos: An acoustically treated wind tunnel; KEMAR fitted with a pair of hearing aids in an anechoic chamber.]

Listening in background noise can be a challenge for people with hearing loss because they often need a higher signal-to-noise ratio (SNR) than people with normal hearing to understand the same amount of speech. Many high-performance hearing aids are implemented with digital signal processing algorithms to reduce the interference of continuous, transient, and wind noise.

Distinguishing between different technologies is essential for effective hearing aid fitting, but can be confusing and difficult because the launch of new technologies is rarely accompanied by detailed explanations of their rationales and mechanisms. In addition, different marketing names may be used to describe algorithms that are implemented with similar functions and/or computational methods, or similar names may be used to describe different algorithms. Several major types of noise reduction strategies are available that can potentially help hearing aid users improve speech understanding and enhance perceived sound quality.

Detection and Classification

In digital signal processing algorithms, the signal detection and classification unit analyzes the spectral, temporal, and/or amplitude characteristics of the microphone input and classifies the signal as speech, music, noise, or other sound scenes. The results are then compared with a set of decision rules that specify the actions and time constants of the hearing aid under different circumstances. The time constants of an algorithm (illustrated in the brief sketch following this list) govern the:

  • Time between the detection of a sound scene and the execution of the decision rules (e.g., a noise reduction algorithm decides to reduce gain one second after an air-conditioning unit is turned on).
  • Speed of execution of a decision rule (e.g., the noise reduction algorithm takes 20 seconds to reduce the designated amount of gain).
  • Time between the detection of a change in the sound scene and the release of the current set of decision rules (e.g., the noise reduction algorithm decides to release the gain reduction within five milliseconds [ms] when the air-conditioning unit is turned off).
  • Speed of releasing the current set of decision rules (e.g., the noise reduction algorithm takes 10 ms to return to 0 dB gain reduction).
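
For readers who prefer a concrete illustration, the sketch below models these four time constants as a simple attack/release gain ramp. The parameter names and values are hypothetical (the defaults echo the examples above) and do not correspond to any particular manufacturer's implementation.

```python
from dataclasses import dataclass

@dataclass
class NoiseReductionTiming:
    """Hypothetical time constants for one noise reduction decision rule."""
    attack_delay_s: float = 1.0     # wait after noise is detected before acting
    attack_time_s: float = 20.0     # time to ramp to the full gain reduction
    release_delay_s: float = 0.005  # wait after noise disappears before releasing
    release_time_s: float = 0.010   # time to ramp back to 0 dB gain reduction

def gain_reduction_db(t_since_detection: float,
                      timing: NoiseReductionTiming,
                      max_reduction_db: float = 12.0) -> float:  # assumed maximum
    """Gain reduction (in dB) t seconds after steady noise is first detected."""
    if t_since_detection < timing.attack_delay_s:
        return 0.0                                   # still waiting to act
    ramp = (t_since_detection - timing.attack_delay_s) / timing.attack_time_s
    return max_reduction_db * min(ramp, 1.0)         # linear ramp to the maximum

# Example: 11 s after an air conditioner turns on, the gain is halfway reduced.
print(gain_reduction_db(11.0, NoiseReductionTiming()))   # 6.0 dB
```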

Directional Microphones

Directional microphones, the most effective noise reduction strategy in hearing aids, are reported to provide a 3–4 dB SNR improvement in real-world environments with low reverberation (Valente, 1999; Ricketts, 2001; Chung, 2004). Many digital hearing aids are implemented with first-order directional microphones that consist of two omnidirectional microphones arranged in a front-back array. The time required for sound waves to travel between the two microphone ports—the external delay—is determined by the distance between them. The back microphone output is delayed and subtracted from that of the front microphone. Effective subtraction occurs when both the frequency response and the phase of the two omnidirectional microphones are matched. The electronic delay applied to the back microphone—the internal delay—can be fixed or adaptive. The relative values of the internal and external delays determine the microphone sensitivity to sounds from different directions, which is often displayed in polar patterns (Figure 1 [PDF]).
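
A rough illustration of how the internal and external delays shape the polar pattern is given below. The sketch computes the sensitivity of a hypothetical two-microphone, delay-and-subtract array at a single frequency; the 12 mm port spacing and the delay values are assumptions for illustration, not specifications of any product.

```python
import numpy as np

C = 343.0          # speed of sound (m/s)
D = 0.012          # assumed 12 mm spacing between the two microphone ports
TAU_EXT = D / C    # external delay: travel time between the ports

def directional_sensitivity(theta_deg, tau_int, freq=1000.0):
    """Magnitude response of a first-order delay-and-subtract array.

    The back-microphone output is delayed by tau_int and subtracted from the
    front-microphone output; a plane wave from angle theta (0 deg = front)
    reaches the back port tau_ext * cos(theta) later than the front port.
    """
    theta = np.radians(theta_deg)
    omega = 2 * np.pi * freq
    total_delay = tau_int + TAU_EXT * np.cos(theta)
    return np.abs(1 - np.exp(-1j * omega * total_delay))

# Internal delay equal to the external delay gives a cardioid (null at 180 deg);
# zero internal delay gives a bidirectional pattern (nulls at +/-90 deg).
angles = np.array([0, 90, 180])
print(directional_sensitivity(angles, TAU_EXT))   # front strong, rear ~0
print(directional_sensitivity(angles, 0.0))       # null at 90 deg
```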

Adaptive Directional Microphone Algorithms

Adaptive directional microphone algorithms (or simply adaptive directional microphones) are designed to change polar patterns by altering the value of the internal delay. They should not be confused with automatic microphone switching algorithms that switch between the omnidirectional and directional microphone modes. The goal of adaptive directional microphones is to reduce the background noise level by steering the least sensitive direction (i.e., null) of the microphone to the direction of the noise source(s). The resulting polar pattern can be bipolar, cardioid, hypercardioid, supercardioid, or any other directivity pattern. In the presence of multiple noise sources, the algorithms may resort to a cardioid or a hypercardioid pattern (Chung, 2004). Adaptive directional microphones typically have a constraint that keeps the microphone most sensitive to sounds from the front, but some manufacturers have recently removed this constraint so that users can hear speech coming from the side (e.g., when listening in a car).
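
Because the null of a first-order delay-and-subtract array falls where the internal delay cancels the acoustic (external) delay, an adaptive algorithm can steer the null simply by recomputing the internal delay. The sketch below, which reuses the assumed 12 mm spacing from the previous example, is a simplified illustration of that relationship, not an actual manufacturer's adaptation rule.

```python
import numpy as np

C = 343.0
D = 0.012                     # assumed port spacing (m)
TAU_EXT = D / C               # external delay

def internal_delay_for_null(noise_angle_deg: float) -> float:
    """Internal delay that places the null at a rear-hemisphere noise angle.

    The array output is zero when tau_int + tau_ext * cos(theta) = 0, so
    tau_int = -tau_ext * cos(theta); this is realizable only for angles
    between 90 and 180 degrees (cos(theta) <= 0), which is why adaptive
    directional microphones keep their nulls in the rear hemisphere.
    """
    cos_theta = np.cos(np.radians(noise_angle_deg))
    if cos_theta > 0:
        raise ValueError("null can only be steered to the rear hemisphere")
    return -TAU_EXT * cos_theta

print(internal_delay_for_null(180.0))   # equals TAU_EXT -> cardioid
print(internal_delay_for_null(110.0))   # ~1/3 of TAU_EXT -> hypercardioid-like
```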

Studies have found that adaptive directional microphones are superior to conventional directional microphones with fixed polar patterns when there are only a few noise sources, but the advantage is reduced or disappears in the presence of multiple noise sources (Ricketts & Henry, 2002; Chung & Zeng, 2009). Additionally, adaptive directional microphones may not be able to steer the null to the dominant noise source in the presence of other lower-level noise sources (Bentler et al., 2004), offering no additional benefit compared to directional microphones with fixed polar patterns.

Multichannel adaptive directional microphones are capable of simultaneously adapting to many polar patterns by adopting different internal delay values in different signal processing channels. The assumptions are that noise sources from various directions may have different frequency content and that maximum noise reduction can be achieved by adapting the null of the directional microphone to the corresponding noise direction in each channel. Theoretically, adaptive directional microphones with higher numbers of channels are more effective in reducing background noise in acoustically complex environments. On a practical level, however, it is not feasible to compare the effect of the number of channels on user performance in commercially available hearing aids because of differences in implementation (e.g., signal detection and classification criteria, decision rules, and time constants) and possible interactions among other hearing aid parameters.

Adjustable Beamwidth Algorithms

Adjustable beamwidth algorithms employ various strategies to reduce the perception of unwanted dominant sounds coming from the back and sides. In one implementation, the adjustment of beamwidth is accomplished by gain reduction. The default polar pattern of the directional microphone is hypercardioid. The algorithm constantly monitors the angle of incidence of the most intense sound in each signal processing channel by computing the phase difference between the signals at the two omnidirectional microphone outputs. If the estimated angle of incidence is greater than the beamwidth chosen in the hearing aid fitting software (e.g., ±25°, ±35°, ±50° azimuth), a gain reduction is applied to the frequency channel. The amount of gain reduction varies from 1 to 24 dB as the intense sound moves from the sides to the back and the gain reduction is accomplished within 50–70 ms. The audiologist can set the algorithm to be always activated or activated only if the overall environmental sound pressure level exceeds 60 dB SPL.
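
The sketch below illustrates, under simplified free-field assumptions, how the angle of incidence can be estimated from the time (phase) difference between the two microphone outputs and compared with a chosen beamwidth. The spacing, beamwidth, and gain-reduction rule are hypothetical values chosen for illustration.

```python
import numpy as np

C = 343.0
D = 0.012   # assumed spacing between the two omnidirectional microphones (m)

def angle_of_incidence_deg(time_difference_s: float) -> float:
    """Estimate the arrival angle from the front/back microphone time difference.

    A plane wave from angle theta reaches the back port (d * cos(theta)) / c
    seconds after the front port, so theta = arccos(c * dt / d). The time
    difference itself would be obtained from the phase difference between the
    two microphone outputs in each frequency channel.
    """
    cos_theta = np.clip(C * time_difference_s / D, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def channel_gain_reduction_db(angle_deg: float, beamwidth_deg: float = 50.0) -> float:
    """Hypothetical rule: no reduction inside the beam, up to 24 dB at the rear."""
    if angle_deg <= beamwidth_deg:
        return 0.0
    # Scale linearly from 1 dB just outside the beam to 24 dB at 180 degrees.
    frac = (angle_deg - beamwidth_deg) / (180.0 - beamwidth_deg)
    return 1.0 + 23.0 * frac

angle = angle_of_incidence_deg(0.0)             # zero delay -> sound from the side
print(angle, channel_gain_reduction_db(angle))  # 90.0 deg, ~8.1 dB reduction
```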

In another implementation, the beamwidth is adjusted by changing the polar pattern of the directional microphone. The algorithm constantly monitors the overall level of the incoming signal and the angle of incidence of the most intense sound. If the overall level is less than 55 dB SPL (i.e., in quiet), a polar pattern with the widest beamwidth is adopted. As the level increases, the beamwidth is reduced by switching to polar patterns with increasingly narrower beamwidths to the front. The minimum beamwidth is adopted if the overall level exceeds 75 dB SPL. At any instant, if the estimated angle of incidence of the most intense sound is within the determined beamwidth, the polar pattern is frozen. Otherwise, the algorithm adapts to different polar patterns in different channels in an attempt to suppress the most intense sound.

Combination Microphones

Some manufacturers provide different options to combine directional and omnidirectional microphones. One combination is to fit one ear with a directional microphone and the other ear with an omnidirectional microphone. The directional microphone reduces noise interference and the omnidirectional microphone provides awareness of environmental sounds. Several studies reported that this combination yielded speech recognition scores comparable to those obtained with directional microphones in both ears (Bentler et al., 2004; Cord et al., 2007; Hornsby & Ricketts, 2007), suggesting that this configuration may be a viable amplification option.

Another combination is to use the omnidirectional microphone mode for low-frequency sounds and the directional microphone mode for high-frequency sounds. The manufacturer claims that this configuration offers the user awareness of low-frequency speech and environmental sounds while taking advantage of the directional effect in the high frequencies. Future studies are needed to examine if this is an effective amplification option.

Many factors can influence the effectiveness of directional microphones. Some field studies reported that hearing aid users may not perceive directional benefit in daily life because the location of the primary talker, the type of background noise, and the acoustic environment can affect hearing aid users' directional or omnidirectional microphone preferences (Walden et al., 2000; Surr et al., 2002). The polar patterns of directional microphones worn on the head are different from those measured in the free field and exhibit different patterns across frequency regions due to head shadow and body baffle effects. As reverberation increases, directional effect decreases because indirect sounds reflected from walls or other reflective surfaces can obscure the true direction of sounds (Ricketts, 2002; Chung, 2004). Directional effect also decreases as vent size increases because processed sounds leak out and unprocessed sounds enter the ear canal through the vent. Nevertheless, even directional microphones implemented in open-fit hearing aids are reported to provide significant benefits compared to omnidirectional microphones and unaided conditions (Klemp & Dhar, 2008; Valente & Mispagel, 2008).

In addition, although directional microphones with fixed polar patterns may improve the ability to localize sounds compared to omnidirectional microphones (Chung et al., 2008), adaptive directional microphones and the use of directional microphones with different polar patterns in the two ears can compromise users' localization ability (Keidser et al., 2006; Van den Bogaert et al., 2006). Further, microphone drift can occur over time and reduce the directional effect because the frequency response and phase of the two omnidirectional microphones may no longer be matched for effective subtraction. Automatic microphone matching algorithms designed to match the frequency responses (and phases) of the microphones can potentially help alleviate this problem.

Noise Reduction Algorithms

Many hearing aids are implemented with digital noise reduction algorithms to reduce noise interference and improve listening comfort. Digital hearing aids typically use four major types of noise reduction algorithms.

Modulation-based

Modulation-based noise reduction algorithms, in general, are reported to reduce listening effort and the aversiveness of sounds, and to increase listening comfort and cognitive capability in background noise (Boymans & Dreschler, 2000; Palmer et al., 2006; Sarampalis et al., 2009). These algorithms received their names from their signal detection and estimation methods. They use modulation rates to identify the presence or absence of speech and use modulation depth to estimate the SNR in the incoming signal.

During speech production, the vocal tract changes shape to produce different sounds. The opening and closing of the vocal tract generates a slow modulation in the speech envelope, typically at the rate of 2 to 10 Hz. Sounds with modulation rates outside of this range are assumed to be noise (e.g., car noise, jackhammer). Additionally, the unit estimates the SNR of the incoming signal by tracking the peaks (maxima) and valleys (minima) of the signal. A high SNR is inferred if there is a large difference between the maxima and minima (i.e., high modulation depth), and vice versa.

Once the modulation rate and the modulation depth are determined, this information is compared with a set of decision rules. If the modulation rate is within the speech range (i.e., speech is present) and the modulation depth is high (i.e., high SNR) in a frequency channel, the gain of the channel is minimally altered. If the modulation rate is outside of the range of speech (i.e., speech is absent) and/or modulation depth is low (i.e., low SNR), the gain of the channel is reduced. The amount of gain reduction differs among algorithms and may depend on the modulation rate, the modulation depth, the overall level of the incoming signal, the frequency weighting for speech intelligibility, types of signal and background noise, and the noise reduction level set in the fitting software (Chung, 2004; Bentler & Chiou, 2006).
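
A rough sketch of this detection-and-decision logic follows. The envelope extraction method, thresholds, and gain values are illustrative assumptions rather than any manufacturer's actual rules.

```python
import numpy as np

def channel_gain_db(channel_signal: np.ndarray, fs: float,
                    mod_band=(2.0, 10.0), depth_threshold_db=6.0,
                    reduction_db=10.0) -> float:
    """Decide the gain of one frequency channel from its envelope statistics.

    Speech-like channels (envelope modulation concentrated at 2-10 Hz and a
    large peak-to-valley modulation depth) are left alone; otherwise the
    channel gain is reduced.
    """
    # Envelope: rectify, then smooth with a short (~10 ms) moving average.
    smooth_len = int(0.01 * fs)
    envelope = np.convolve(np.abs(channel_signal),
                           np.ones(smooth_len) / smooth_len, mode="same")

    # Modulation rate check: fraction of envelope-fluctuation energy at 2-10 Hz.
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean())) ** 2
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    in_band = spectrum[(freqs >= mod_band[0]) & (freqs <= mod_band[1])].sum()
    speech_like_rate = in_band > 0.5 * spectrum[freqs > 0.5].sum()  # assumed cutoff

    # Modulation depth check: peak-to-valley ratio of the envelope (in dB).
    peaks = np.percentile(envelope, 95)
    valleys = np.percentile(envelope, 5) + 1e-12
    depth_db = 20 * np.log10(peaks / valleys)

    if speech_like_rate and depth_db > depth_threshold_db:
        return 0.0                 # speech present at a favorable SNR
    return -reduction_db           # noise-dominated channel: reduce the gain
```

In a real hearing aid, this decision would be recomputed continuously in each channel and smoothed by the attack and release time constants described earlier.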

Modulation-based noise reduction algorithms also exhibit vastly different time constants. Some algorithms take several seconds to start reducing the gain and reach the maximum gain reduction in several stages and/or over more than 30 seconds. Others may start reducing the gain within a few seconds and achieve maximum gain reduction within several seconds.

Modulation-based noise reduction algorithms are capable of increasing the SNR in the overall signal but not the SNR within a signal processing channel. These algorithms reduce noise interference by reducing the contribution of the noise-dominant channels to the overall signal. They do not increase the within-channel SNR, however, because the same amount of gain reduction is applied to both speech and noise in a channel. These algorithms, therefore, do not enhance speech intelligibility in noise (Boymans & Dreschler, 2000; Walden et al., 2000) because human auditory filters typically have narrower bandwidths than the frequency channels in hearing aids. If, however, the algorithm has more signal processing channels than signal delivery channels (such as in cochlear implants), modulation-based noise reduction algorithms can potentially increase the SNR within a signal delivery channel and improve speech intelligibility (Chung, Zeng, & Acker, 2006).

Chung (2007) reported that some noise reduction algorithms provide less noise reduction when wide dynamic range compression is activated (i.e., serial configuration), whereas others provide the same amount of noise reduction in both compression and linear conditions (i.e., parallel configuration). Some algorithms with a serial configuration, however, can still provide a larger amount of noise reduction in the compression condition than some algorithms with a parallel configuration. If the functions of noise reduction algorithms are important for a user, audiologists need to consider the noise reduction and wide dynamic range compression configuration during the hearing aid selection process and determine the amount of noise reduction using the individual user's listening program during the fitting process.

Spectral Subtraction

Spectral subtraction noise reduction algorithms are named for their decision rules for gain reduction. They include a group of algorithms (such as adaptive Wiener filters) that are designed to estimate the "noise" spectrum in the incoming signal and to shape the "speech in noise" spectrum to closely resemble the "clean speech" spectrum.

Ideally, spectral subtraction is carried out by subtracting a "noise" with a known spectrum from a "speech in noise" spectrum to obtain the "clean speech" spectrum. As it is impossible to calculate the exact noise spectrum in running speech, the algorithm estimates the "noise" spectrum by observing the noise between gaps of speech and updating this estimate frequently. The gain of each frequency channel is then adjusted so that the "speech in noise" spectrum approximates the "clean speech" spectrum. The amount of gain reduction depends on the estimated SNR in the "speech in noise" signal, the accuracy of the "noise" estimation, and other factors. The time constants for the gain reduction are typically much faster than those used in modulation-based noise reduction algorithms (Kates, 2008). Future studies are needed to determine the effectiveness of spectral subtraction algorithms on users' speech recognition performance and perceived sound quality.
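
The sketch below shows the core arithmetic of a basic spectral subtraction gain rule applied to a single short-time frame. The oversubtraction factor and gain floor are assumed values; practical implementations also smooth the estimates across time and frequency to limit processing artifacts.

```python
import numpy as np

def spectral_subtraction_frame(noisy_frame: np.ndarray,
                               noise_power_estimate: np.ndarray,
                               oversubtraction: float = 1.0,
                               gain_floor: float = 0.1) -> np.ndarray:
    """Apply a spectral subtraction gain to one windowed frame of noisy speech.

    noise_power_estimate is the per-bin noise power tracked during the gaps
    between speech and updated frequently, as described in the text.
    """
    spectrum = np.fft.rfft(noisy_frame * np.hanning(len(noisy_frame)))
    noisy_power = np.abs(spectrum) ** 2

    # Estimate the clean-speech power and convert it to a per-bin gain.
    clean_power = np.maximum(noisy_power - oversubtraction * noise_power_estimate,
                             (gain_floor ** 2) * noisy_power)
    gain = np.sqrt(clean_power / (noisy_power + 1e-12))

    return np.fft.irfft(gain * spectrum, n=len(noisy_frame))

# Example: one 32 ms frame (512 samples at 16 kHz) with a flat noise estimate.
frame = np.random.randn(512)
noise_estimate = np.full(257, 50.0)   # 512-point rfft -> 257 frequency bins
cleaned = spectral_subtraction_frame(frame, noise_estimate)
```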

Transient

Transient noise reduction algorithms are designed to reduce the level and the annoyance of transient impulse noises such as glass clinking, door banging, or hand clapping. The amplitude-time envelopes of transient noises are characterized by rapid rise times (e.g., a level increase of more than 40 dB within 50 ms) and peak levels that are much higher than the long-term root-mean-square (RMS) level of other sounds. The signal detection and classification unit of these algorithms uses these characteristics to infer the presence of transient noise and reduces the gain in the frequency channel(s) with transient energy. The amount of gain reduction typically depends on the peak-to-RMS ratio, the rise time, and the level of the transient noise.

The associated time constants are usually on the order of milliseconds. Some algorithms take advantage of the 5–6 ms signal processing delay of digital hearing aids by detecting the rise time and the peak-to-RMS ratio before the signal is band-pass filtered. If a transient noise is detected, the gain reduction is executed before the processed signal exits the receiver, and the gain reduction is quickly released afterward. Because gain reduction occurs only during the presence of the transient noise, the audibility and intelligibility of speech are generally not affected in well-designed algorithms.
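
A minimal sketch of this detection logic appears below. The 40 dB rise within 50 ms comes from the figures quoted above, while the window length and the 20 dB peak-to-RMS criterion are illustrative assumptions.

```python
import numpy as np

def detect_transient(recent_samples: np.ndarray, fs: float,
                     rise_db_threshold: float = 40.0,
                     window_s: float = 0.05) -> bool:
    """Flag a transient if the level rises by more than 40 dB within 50 ms
    and the peak stands well above the long-term RMS level."""
    n = int(window_s * fs)
    window = recent_samples[-n:]
    long_term_rms = np.sqrt(np.mean(recent_samples ** 2)) + 1e-12

    start_level = np.abs(window[: n // 10]).max() + 1e-12   # level at window start
    peak_level = np.abs(window).max()

    rise_db = 20 * np.log10(peak_level / start_level)
    peak_to_rms_db = 20 * np.log10(peak_level / long_term_rms)
    return rise_db > rise_db_threshold and peak_to_rms_db > 20.0  # assumed 20 dB

# If a transient is flagged, the gain of the affected channel(s) is cut for a
# few milliseconds and then quickly restored, so ongoing speech is unaffected.
```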

Wind

Wind noise generated by random turbulent flow across the hearing aid microphone(s) can be a mere annoyance or a debilitating interference for hearing aid users. The signal detection and classification unit of wind noise reduction algorithms typically identifies the presence of wind noise by computing the correlation between the outputs of the two omnidirectional microphones that form the directional microphone, and infers the presence of wind if the correlation is low.
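
In its simplest form, the correlation test can be sketched as follows; the correlation threshold is an assumed value.

```python
import numpy as np

def wind_detected(front_mic: np.ndarray, back_mic: np.ndarray,
                  threshold: float = 0.3) -> bool:
    """Infer wind noise from the correlation of the two microphone outputs.

    Far-field sounds (speech, ambient noise) arrive at both closely spaced
    microphones as nearly identical, highly correlated signals; turbulence at
    each microphone port is locally generated and largely uncorrelated.
    """
    correlation = np.corrcoef(front_mic, back_mic)[0, 1]
    return correlation < threshold   # low correlation -> probably wind
```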

Common wind noise reduction strategies include the use of:

  • Low-frequency gain reduction or high-pass filtering, because wind noise is dominated by low-frequency sounds.
  • Omnidirectional microphones in all frequency channels or only in low-frequency channels because omnidirectional microphones are reported to have lower wind noise levels (Beard & Nepomuceno, 2001; Thompson & Dillon, 2002; Chung et al., 2009, 2010).
  • A microphone with a lower amplitude output (i.e., a less sensitive microphone) as the input of the hearing aid.
  • Summed output of the two omnidirectional microphones as the input of the hearing aid, because when correlated signals (sounds in the far field) are summed, the overall level increases by 6 dB, whereas when uncorrelated signals (wind noise) are summed, the level increases by only 3 dB. Adopting this strategy effectively reduces wind noise by 3 dB relative to far-field sounds (Kates, 2008); a numerical check of these figures appears after this list.
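
A quick numerical check of the 6 dB and 3 dB figures cited in the last item, using synthetic signals:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 160000                                      # 10 s at 16 kHz
speech_like = rng.standard_normal(n)            # identical at both microphones
wind_front = rng.standard_normal(n)             # independent at each microphone
wind_back = rng.standard_normal(n)

def level_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Correlated (far-field) signal: summing doubles the amplitude -> +6 dB.
print(level_db(speech_like + speech_like) - level_db(speech_like))   # ~6.0 dB

# Uncorrelated wind noise: only the powers add -> +3 dB.
print(level_db(wind_front + wind_back) - level_db(wind_front))       # ~3.0 dB
```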

To further reduce wind noise, hearing aid users can avoid positions with the highest wind noise levels and orient themselves toward positions with the lowest levels. Recent studies show that the highest turbulent flow noise levels are measured when the flow (wind) comes from the front and the back for both custom and behind-the-ear hearing aids, and when the head is turned to angles between 190° and 250° for custom hearing aids (see Figure 2 online [PDF]). The lowest noise levels are measured when custom and behind-the-ear hearing aids face the direction of the flow and when behind-the-ear hearing aids face downstream (Chung et al., 2009, 2010).

Advances in technology offer many new solutions for different types of background noise. Digital hearing aids are generally rated to be more comfortable, to perform better in noise, and to yield higher overall satisfaction than non-digital hearing aids (Kochkin, 2005). Successful applications of these technologies depend on scientists' continuing research and development efforts to improve the effectiveness of the algorithms. It is also critical that audiologists understand the nuances, advantages, and limitations of different algorithms when selecting hearing aids that will suit the users' individual needs and lifestyles.

King Chung, PhD, CCC-A, is an assistant professor at Northern Illinois University. Her research interests are wind noise measurements and reduction, and the application of signal processing technologies to enhance cochlear implants, hearing aids, and hearing protectors. Contact her at kchung@niu.edu.

cite as: Chung, K. (2010, April 06). Reducing Noise Interference: Strategies to Enhance Hearing Aid Performance. The ASHA Leader.

References

Beard, J., & Nepomuceno, H. (2001). Wind noise levels for an ITE hearing aid. Knowles Engineering Report, 128, Revision A.

Bentler, R., & Chiou, L.K. (2006). Digital noise reduction: an overview. Trends in Amplification, 10(2), 67–82.

Bentler, R.A., Egge, J.L.M., Tubbs, J.L., Dittberner, A.B., & Flamme, G.A. (2004). Quantification of directional benefit across different polar response patterns. Journal of American Academy of Audiology, 15, 649–659.

Bentler, R.A., Tubbs, J.L., Egge, J.L.M., Flamme, G.A., & Dittberner, A.B. (2004). Evaluation of an adaptive directional system in a DSP hearing aid. American Journal of Audiology, 13, 73–79.

Boymans, M., & Dreschler, W.A. (2000). Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality. Audiology, 39, 260–268.

Chung, K. (2004). Challenges and recent developments in hearing aids Part I: Speech understanding in noise, microphone technologies and noise reduction algorithms. Trends in Amplification, 8(3), 83–124.

Chung, K. (2007). Effective compression and noise reduction configurations for hearing protectors. Journal of Acoustical Society of America, 121(2), 1090–1101.

Chung, K., Mongeau, L., & McKibben, N. (2009). Wind noise in hearing aids with directional and omnidirectional microphones: polar characteristics of behind-the-ear hearing aids. Journal of Acoustical Society of America, 125(4), 2243–2259.

Chung, K., McKibben, N. & Mongeau, L. (2010, in press). Wind noise in hearing aids with directional and omnidirectional microphones: polar characteristics of custom-made hearing aids. Journal of Acoustical Society of America.

Chung, K., Neuman, A., & Higgins, M. (2008). Effects of in-the-ear microphone directionality on sound direction identification. Journal of Acoustical Society of America, 123(4), 2264–2275.

Chung, K. & Zeng, F-G. (2009). Using adaptive directional microphones to enhance cochlear implant performance. Hearing Research, 250, 27–37.

Chung, K., Zeng, F-G., & Acker, K.N. (2006). Effects of directional microphone and adaptive multi-channel noise reduction algorithm on cochlear implant performance. Journal of Acoustical Society of America, 120(4), 2216–2227.

Cord, M.T., Walden, B.E., Surr, R.K., & Dittberner, A.B. (2007). Field evaluation of an asymmetric directional microphone fitting. Journal of American Academy of Audiology, 18(3), 245–256.

Hornsby, B.W., & Ricketts, T.A. (2007). Effects of noise source configuration on directional benefit using symmetric and asymmetric directional hearing aid fittings. Ear and Hearing, 28(2), 177–186.

Kates, J.M. (2008). Digital Hearing Aids. San Diego: Plural Publishing.

Keidser, G., Rohrseitz, K., Dillon, H., Hamacher, V., Carter, L., Rass, U., & Convery, E. (2006). The effect of multi-channel wide dynamic range compression, noise reduction, and the directional microphone on horizontal localization performance in hearing aid wearers. International Journal of Audiology, 45(10), 563–579.

Klemp, E.J., & Dhar, S. (2008). Speech perception in noise using directional microphones in open-canal hearing aids. Journal of American Academy of Audiology, 19(7), 571–578.

Kochkin, S. (2005). MarkeTrak VII: Customer satisfaction with hearing instruments in the digital age. Hearing Journal, 58(9), 30–42.

Palmer, C.V., Bentler, R., & Mueller, H.G. (2006). Amplification with digital noise reduction and the perception of annoying and aversive sounds. Trends in Amplification, 10(2), 95–104.

Ricketts, T.A. (2001). Directional hearing aids. Trends in Amplification, 5(4), 139–176.

Ricketts, T.A., & Henry, P. (2002). Evaluation of an adaptive directional-microphone hearing aid. International Journal of Audiology, 41, 100–112.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52(5), 1230–1240.

Surr, R.K., Walden, B.E., Cord, M.T., & Olson, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of American Academy of Audiology, 13(6), 308–322.

Thompson, S., & Dillon, H. (2002). Wind noise in hearing aids. Presented at American Academy of Audiology Convention, Philadelphia, PA.

Valente, M. (1999). Use of microphone technology to improve user performance in noise. Trends in Amplification, 4(3), 112–135.

Valente, M., & Mispagel, K. M. (2008). Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology, 47(6), 329–336.

Van den Bogaert, T., Klasen, T.J., Moonen, M., Van Deun, L., & Wouters, J. (2006). Horizontal localization with bilateral hearing aids: without is better than with. Journal of Acoustical Society of America, 119(1), 515–526.

Walden, B., Surr, R., Cord, M., Edwards, B., & Olson, L. (2000). Comparison of benefits provided by different hearing aid technologies. Journal of American Academy of Audiology, 11(10), 540–560.



  
