Various therapeutic medications damage the inner ear, including certain drugs used to fight cancer and life-threatening infectious diseases. Drug-related inner ear damage, or ototoxicity, results in auditory and/or vestibular dysfunction that is often permanent. Symptoms of ototoxicity include tinnitus, dizziness, and difficulty understanding speech in noise.
Unfortunately, ototoxic hearing loss may go unnoticed by patients until a communication problem becomes apparent, signifying that hearing loss within the frequency range important for speech understanding has already occurred. Similarly, by the time a patient complains of dizziness, permanent vestibular system damage probably has already occurred. Because symptoms of ototoxicity are poorly correlated with drug dosage, peak serum levels, and other toxicities, the only way to detect ototoxicity is by assessing auditory and vestibular function directly.
For patients with life-threatening illnesses that warrant treatment with ototoxic drugs, communication ability is a central quality-of-life issue. Identifying ototoxic damage early can improve treatment outcomes by minimizing the progression of hearing loss and by enabling timely counseling and rehabilitation.
Initial ototoxic drug exposure typically affects cochlear regions coding the high frequencies. Continued exposure results in a spread of damage to progressively lower frequencies. Early identification of ototoxic hearing loss provides physicians the opportunity, depending on a patient's overall treatment picture, to adjust the therapeutic regimen in order to minimize or prevent hearing loss severe enough to require rehabilitation.
Monitoring hearing in patients receiving ototoxic drugs provides audiologists opportunities to counsel patients and their families regarding ototoxicity-induced hearing loss, tinnitus, and dizziness, communication strategies, and the synergistic effects of noise and ototoxic damage. Early identification and monitoring of ototoxic hearing loss also provides audiologists the opportunity to perform appropriate rehabilitation during and after treatment.
Many of the same considerations are required for the successful implementation of ototoxicity monitoring programs as for hearing conservation programs and newborn hearing screening programs. Perhaps most important is consideration of key questions related to the program's goals. These key questions include: What is the purpose of identifying ototoxic changes? What is the target population to be monitored? What are the methods to be used for identifying patients? What are the timelines to be used for baseline and monitoring tests? What are the tests to be used, and how can they be adapted for the target population in order to meet the program goals?
Defining the Purpose of the Program
The purpose of the program drives many decisions about program implementation. For example, if the purpose is to prevent or minimize spread of ototoxic hearing loss into frequencies important for understanding speech, including ultra-high-frequency audiometry in the test protocol may be warranted.
Ototoxic hearing loss, particularly in the pediatric population, may be tolerated in favor of survival. In such cases, family counseling and rehabilitation planning is a major goal. If the program is to include patient counseling regarding realistic expectations, communication strategies, and aural rehabilitation as soon as is practical, there must be mechanisms in place to communicate test results not only to a patient's medical provider, but also to the patient and family directly. Discussions with stakeholders such as the audiology, oncology, infectious disease, and nursing staff are critical for determining perceived program needs and developing appropriate program goals.
Defining the Target Population
The chemotherapeutic agents cisplatin and carboplatin, and certain aminoglycoside antibiotics, are associated with a high incidence of ototoxic hearing loss. A program might target patients scheduled to receive drugs showing high incidence of ototoxicity, as well as the ototoxic drugs prescribed most often at the particular hospital serviced by the program. In addition, a program might target individuals with risk factors for ototoxicity including age (children and the elderly), co-morbidities, poor general physical health, and treatment with multiple ototoxic agents. A target population comprising children, sedated adults, or patients confined to the hospital ward will affect the choice of tests used for ototoxicity monitoring, as described below.
Methods for Identifying Patients
Two primary resources for patient identification are key medical staff and hospital pharmacy medication lists. Identifying patients for whom ototoxicity monitoring is an appropriate part of a therapeutic management plan requires a coordinated effort between the audiologist and members of the patient's health care team.
It is important, therefore, to establish and maintain a relationship with key medical personnel. This relationship is supported by education regarding the purposes and benefits of ototoxicity monitoring. Ideally, the medical or nursing staff will discuss ototoxicity monitoring evaluations with their patients and provide referrals for monitoring. Computer-generated pharmacy lists are also an excellent referral source, as such lists may include a patient's name, treatment medication, and location on the ward.
Timeline for Baseline and Monitoring Tests
Ototoxicity is determined by comparing baseline data, ideally obtained prior to ototoxic drug administration, to the results of subsequent monitoring tests. In this way, each patient serves as his or her own control.
ASHA's "Guidelines for the Audiologic Management of Individuals Treated with Cochleotoxic Drug Therapy" (1994), based in part on the results of large clinical studies, state that the Baseline Evaluation should occur no later than 24 hours after the administration of chemotherapeutic drugs and no more than 72 hours following administration of aminoglycoside antibiotics. A recheck of thresholds within 24 hours of the Baseline Test can be helpful for determining patient reliability for pure-tone threshold testing.
The frequency of Monitoring Evaluations depends upon a patient's particular drug regimen, which can be determined by reviewing the patient's medical chart. Monitoring Evaluations, which may be a pared-down version of the Baseline Evaluation, are performed periodically throughout treatment, usually prior to each dose for chemotherapy patients, and 1–2 times per week for patients receiving ototoxic antibiotics.
Monitoring and appropriate referrals for further auditory and vestibular testing also are warranted any time a patient reports increased hearing difficulties, tinnitus, aural fullness, or dizziness. Confirming significant changes by retest will reduce false positive rates and is recommended by ASHA (1994). Post-treatment evaluations are necessary to confirm that hearing is stable because ototoxic hearing loss can occur up to 6 months following drug exposure.
Detecting changes in pure-tone thresholds directly using serial audiograms is considered the most effective indicator of ototoxic hearing loss, particularly when ultra-high-frequency thresholds are included. The goal of serial monitoring tests for detection of ototoxic hearing loss is typically to categorize patients into two groups, those who exhibit hearing change and those who do not, based on a cutoff or hearing-change criterion value. Although the ASHA guidelines have been implemented in many clinical settings, use of well-accepted statistical methods for determining test performance in large groups of patients receiving ototoxic drugs and hospitalized (control) patients receiving non-ototoxic drugs will likely be required in order for standard criteria to be fully acknowledged.
Test performance for ototoxicity monitoring can be determined by examining the sensitivity and specificity obtained using a particular criterion threshold shift to identify ototoxic hearing loss. The percentage of times patients exhibiting hearing change are identified as showing change using a criterion threshold shift is a measure of that test's hit rate or sensitivity. Specificity or correct rejection rate refers to the percentage of times patients with stable hearing are correctly labeled using the criterion threshold shift.
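In confusion-matrix terms, these two rates can be computed directly from the four outcome counts for a given criterion threshold shift. A minimal sketch (the counts used in the usage comments are invented for illustration):

```python
def sensitivity(hits, misses):
    """Hit rate: proportion of patients with true hearing change who are
    flagged by the criterion threshold shift."""
    return hits / (hits + misses)

def specificity(correct_rejections, false_positives):
    """Correct-rejection rate: proportion of patients with stable hearing
    who are correctly labeled as unchanged."""
    return correct_rejections / (correct_rejections + false_positives)

# Hypothetical tallies: 45 of 50 true changes detected,
# 90 of 100 stable-hearing patients correctly rejected.
print(sensitivity(45, 5))         # 0.9
print(specificity(90, 10))        # 0.9
```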
Sensitivity and specificity have related diagnostic errors. Failure to correctly identify hearing change results in a miss; diagnosing a hearing change when hearing sensitivity is unaltered results in a false positive. The likelihood of making diagnostic errors in ototoxicity monitoring depends on how a criterion threshold shift relates to normal test-retest variability intrinsic to serial testing. A statistical method for examining test performance, which borrows from clinical decision theory, involves the construction of receiver-operator characteristic (ROC) curves, in which hit rates for a range of criterion threshold shifts can be plotted as a function of the corresponding false alarm rates.
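The ROC construction described above amounts to sweeping a range of candidate criterion shifts and pairing each criterion's hit rate with its false alarm rate. A sketch of that sweep, using hypothetical threshold-shift data:

```python
def roc_points(shifts_changed, shifts_stable, criteria):
    """Return (false_alarm_rate, hit_rate) pairs for each candidate
    criterion threshold shift, in dB.

    shifts_changed: observed shifts in ears with true ototoxic change.
    shifts_stable:  observed shifts in ears with stable hearing, i.e.,
                    normal test-retest variability (hypothetical data).
    """
    points = []
    for c in criteria:
        hit_rate = sum(s >= c for s in shifts_changed) / len(shifts_changed)
        false_alarm_rate = sum(s >= c for s in shifts_stable) / len(shifts_stable)
        points.append((false_alarm_rate, hit_rate))
    return points

# Hypothetical data: a stricter (larger) criterion lowers both rates.
print(roc_points([25, 15, 30, 10, 20], [0, 5, 10, 0, 5], [10, 20]))
```

Plotting these pairs across many criteria traces the ROC curve; a criterion that moves the point toward the upper-left corner trades off fewer false alarms for a given hit rate.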
For serial audiograms, ASHA (1994) developed criteria for a clinically significant hearing change based on results of large clinical research studies, reported normal test-retest variability in healthy subjects not receiving ototoxic drugs, and to a limited extent on ROC curves constructed for threshold shift data obtained in drug- or noise-exposed individuals. These criteria are: ≥20 dB pure-tone threshold shift at any one test frequency; ≥10 dB shift at two consecutive test frequencies; or threshold responses shifting to "no response" at three consecutive test frequencies. Any change must be confirmed by retest.
The ASHA criteria employ a comparatively large (20 dB) single frequency threshold shift or smaller shifts at more than one frequency because threshold shifts for two or three frequency averages have been shown to increase test performance for detecting ototoxicity- and noise-induced hearing shifts. This is presumably because threshold shifts at adjacent test frequencies indicate more systematic change compared to shifts at any single frequency. The ASHA criteria include confirmation of test results because threshold shifts obtained on repeat tests are more likely to represent a true hearing change compared to results obtained on a single test.
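The ASHA (1994) criteria can be expressed as a simple check over paired baseline and monitoring thresholds. This is a sketch, not a clinical tool: the use of `None` to mark "no response" is an assumption, and any flagged change still requires confirmation by retest.

```python
def asha_significant_change(baseline, monitor):
    """Apply the ASHA (1994) criteria for clinically significant change.

    baseline, monitor: thresholds (dB HL) at the same test frequencies
    in ascending frequency order; None marks "no response".
    Returns True if any criterion is met (to be confirmed by retest).
    """
    shift, lost = [], []
    for b, m in zip(baseline, monitor):
        lost.append(b is not None and m is None)
        shift.append(m - b if (b is not None and m is not None) else 0)
    # Criterion 1: >= 20 dB threshold shift at any one test frequency.
    if any(s >= 20 for s in shift):
        return True
    # Criterion 2: >= 10 dB shift at two consecutive test frequencies.
    if any(shift[i] >= 10 and shift[i + 1] >= 10 for i in range(len(shift) - 1)):
        return True
    # Criterion 3: loss of response at three consecutive test frequencies
    # where responses were previously obtained.
    return any(all(lost[i + j] for j in range(3)) for i in range(len(lost) - 2))
```

For example, a single 25 dB shift at one frequency meets criterion 1, while an isolated 5 dB shift (within normal test-retest variability) meets none.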
The ASHA guidelines for ototoxicity monitoring emphasize the increased test sensitivity achieved by including ultra-high-frequency monitoring to detect ototoxicity. Test-retest differences for ultra-high-frequency thresholds using modern equipment are generally reported to be within ±10 dB for frequencies between 9 and 14 kHz. False positive rates indicating a change in ultra-high-frequency thresholds in subjects not exposed to ototoxic drugs are low in both young and older adults, even when thresholds are tested on the hospital ward rather than under controlled sound-booth conditions.
Ultra-high-frequency sensitivity can be monitored in older children; however, test-retest variability is generally poorer in young children. Consequently, ultra-high-frequency testing in young children will likely result in lower sensitivity and higher false positive rates compared to adults.
Additional Factors to Consider
Effectiveness of particular test protocols for detecting and monitoring ototoxicity depends on a variety of factors in addition to test sensitivity and specificity. Other important factors to consider are the status of patients typically targeted for testing (both their ability to provide reliable behavioral data and their pre-exposure hearing sensitivity), speed of the test and its analysis, cost of performing and interpreting the test, and availability of equipment.
Patient responsiveness can be determined, in part, by physician or nurse reports in the patient's medical chart. The ASHA 1994 guidelines recommend a full audiometric evaluation for patients who are alert and responsive. Objective measures of auditory status should be included in the Baseline Evaluation if there is a possibility that the patient will become less responsive over the course of treatment.
An abbreviated test battery is required for patients who tire easily or who show limited responsiveness or awareness, such as difficulty identifying where they are or why they are in the hospital. To reduce test time while maintaining high sensitivity to ototoxic hearing damage, a shortened protocol is recommended that monitors a range of frequencies near each patient's upper frequency limit of hearing. The reported hit rate for this protocol is approximately 90% in large groups of adult patients whose ototoxic hearing changes were observed using full-frequency testing.
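Shortened protocols of this kind have been described as monitoring frequencies in small (e.g., 1/6-octave) steps within roughly the octave at and below the patient's upper frequency limit of hearing. A sketch of selecting such a frequency set, where the step size and number of steps are assumptions rather than a prescribed protocol:

```python
def monitoring_frequencies(upper_limit_hz, steps=7):
    """Frequencies (Hz) spanning the octave at and below a patient's upper
    frequency limit of hearing, in descending 1/6-octave steps.
    Step size and count are illustrative assumptions."""
    return [round(upper_limit_hz * 2 ** (-k / 6), 1) for k in range(steps)]

# A patient whose upper limit is 16 kHz would be monitored from
# 16 kHz down one octave to 8 kHz.
print(monitoring_frequencies(16000))
```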
A more complete Monitoring Evaluation is necessary if hearing change is observed using the shortened protocol described above. Data obtained using a test-battery approach allow hearing changes to be verified, threshold shifts due to middle ear dysfunction to be ruled out, and the effect of hearing changes on speech recognition to be determined. Objective measures, such as evoked otoacoustic emissions (OAEs) and the auditory brainstem response (ABR), are particularly useful to include in the Monitoring Evaluation test battery for children with limited attention spans.
There is a class of patients unable to provide reliable behavioral data that includes infants and non-responsive adults on the hospital ward. Objective tests must be used to monitor changes in auditory function in non-responsive patients. As described above, middle ear dysfunction must be ruled out in order to determine that any changes noted are likely due to changes in cochlear function. If middle ear function is normal and hearing is good, OAEs appear to be an excellent indicator of early ototoxic damage. However, abnormal middle ear function and baseline hearing loss greater than about 40 dB HL may preclude effective monitoring using OAEs. Use of ABR testing may be more appropriate in such cases.
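The triage logic in this paragraph — OAEs when middle ear function is normal and baseline hearing loss is no greater than about 40 dB HL, otherwise ABR — can be sketched as follows; the 40 dB cutoff is the approximate figure from the text, not a firm clinical rule:

```python
def choose_objective_test(middle_ear_normal, baseline_loss_db_hl):
    """Rough triage for objective monitoring of non-responsive patients.
    A sketch of the logic described above, not a clinical decision rule."""
    if middle_ear_normal and baseline_loss_db_hl <= 40:
        return "OAE"   # good hearing, normal middle ear: OAEs indicate early damage
    return "ABR"       # abnormal middle ear or larger baseline loss: prefer ABR

print(choose_objective_test(True, 30))   # OAE
print(choose_objective_test(False, 30))  # ABR
```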
Determining effective ototoxicity detection and monitoring strategies using objective measures of auditory function is an active area of research. However, there currently are no accepted protocols or criteria for ototoxic change using objective measures. Most reports in patients receiving ototoxic drugs have focused on ABR or OAE test sensitivity, in which sensitivity was defined as a clinically significant change in the value of the objective measure.
Test-retest variability in subjects not receiving ototoxic drugs has been used to provide criteria for a clinically significant response change and to estimate false positive rates. Such studies have been useful for developing potential objective protocols for ototoxicity, which need to be validated. Further research is needed comparing test performance for each objective test (i.e., its sensitivity and specificity) to a behavioral standard.
Acknowledgement: Work supported by the Department of Veterans Affairs Rehabilitation Research and Development Service (Grants C99-1794RA, C97-1256RA, E3239V, C3213R, and C02-2637R).