These 11 up-and-coming technologies could revolutionize diagnosis and treatment of speech, language and hearing disorders.
A woman's tinnitus begins bothering her at work. She hits a button on her smartphone, activating an implanted device that stimulates her vagus nerve. The ringing stops and she returns to work.
A musician who's spent years playing guitar next to blaring amplifiers can no longer hear his wife or children calling him because of severely damaged cochlear hair cells. Surgeons implant stem cells to replace the damaged cells, and he can once again hear others calling his name.
A soldier whose convoy was hit by an explosion in Afghanistan shows no signs of traumatic brain injury (TBI) on the usual sequence of brain scans. But an advanced new kind of imaging reveals a surefire sign of TBI: damage to his blood vessels. With this diagnosis, the soldier can now get the treatment he needs.
New forms of imaging and other technologies are being investigated or used to diagnose some communication disorders, while brain stimulation and smartphones show promise in helping to manage or treat others. Here we take a closer look at 11 such technologies that could change the future of communication sciences and disorders.
1. A Hair Cell Away From Hearing Repair?
When we scrape or cut our skin, new cells rapidly replace the damaged ones. If this were also true when we damage our inner ear, we could go to even the loudest rock concert and have perfectly restored hearing in only a few days.
What may sound like fantasy could become reality through technological advancements … and research on birds. Most hearing loss in people is sensorineural—resulting from degeneration of cochlear hair cells or spiral ganglion neurons, the first cells in the auditory pathway—and most sensorineural hearing loss is permanent, because we cannot regenerate cochlear hair cells (or neurons) once they are lost. Hearing aids and cochlear implants treat the symptom—hearing loss—but not the missing cells that cause the problem. Although these two technologies improve communication, their limitations become evident when listening conditions are challenging, as they so often are in our fast-moving, noisy society.
Birds, however, are different—they regenerate lost hair cells in a few weeks. Research is now underway to determine how to promote hair cell replacement, first in laboratory mammals and then in humans, using two general approaches. First, investigators seek to identify molecules that can push cochlear progenitor cells to form new hair cells. This premise is based on the assumption that cells in an injured cochlea can form new hair cells with proper treatments. Some labs derive ideas from birds and other animals that regenerate hair cells spontaneously. Others follow clues provided by hair cell development.
In a second approach, investigators are examining the potential of embryonic, neural, or induced pluripotent stem cells from outside the ear to form hair cells when transplanted into the cochlea. These investigative approaches are not mutually exclusive, and the knowledge from each benefits the other.
Hair cell regeneration as a treatment for sensorineural hearing loss is still years away, but investigators have made important strides in demonstrating the feasibility of applying genetic and transplant therapies to improve hearing. Stay tuned!
JENNIFER STONE, PHD, University of Washington, Department of Otolaryngology-Head and Neck Surgery
EDWIN RUBEL, PHD, University of Washington, Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery
2. Laryngeal Imaging Goes Sharper, Faster, Deeper
To diagnose a voice disorder, clinicians use oral or nasal endoscopic laryngeal imaging to peer at the vocal folds as they vibrate. And this technology keeps getting smarter, most recently by becoming:
- Sharper. Videostroboscopy has continued to be the mainstay of clinical laryngeal imaging since the 1980s. Basic stroboscopic technology is now being coupled with high-definition video sensors to significantly improve image quality and, through increased spatial resolution, potentially enhance assessment of vocal fold tissue health and function. Videostroboscopy, however, still has limitations that other developing technologies are addressing.
- Faster. New high-speed/high-resolution digital video cameras with unprecedented increases in light sensitivity and frame rates are capturing vocal fold actions that can't be seen with videostroboscopy alone. These actions include true cycle-to-cycle details of vocal fold vibration, as well as non-periodic phenomena associated with more severe types of dysphonia, voice breaks, and the beginning and end of voiced sounds. Researchers also are working to develop better tools to parse the huge amount of data collected during high-speed imaging.
- Deeper. Current clinical laryngeal imaging uses endoscopes capable of providing only uncalibrated two-dimensional views of the vocal folds' surfaces. But researchers are developing new laser-based technologies such as depth-kymography, which shows precise size calibration of endoscopic images and tracks the important vertical displacement of vocal fold vibration. Another laser-based approach, dynamic optical coherence tomography, can capture dynamic cross-sectional images of vibrating vocal fold tissue to illustrate more clearly how disorders or medical treatments, such as injections, affect its functioning.
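To give a flavor of what "parsing the huge amount of data" can involve, here is a minimal, purely hypothetical sketch (not taken from any actual clinical system). High-speed video is often reduced to a glottal area waveform—the size of the glottal opening measured frame by frame—from which cycle-to-cycle periods can be estimated; irregular periods are one correlate of dysphonia. The frame rate and data below are illustrative only.

```python
# Hypothetical sketch: reducing high-speed laryngeal video to a
# glottal area waveform (GAW) and measuring vibratory cycle periods.

def cycle_periods(gaw, fps):
    """Return cycle durations (ms) between successive glottal
    closures, taken here as local minima of the area waveform."""
    minima = [i for i in range(1, len(gaw) - 1)
              if gaw[i] < gaw[i - 1] and gaw[i] <= gaw[i + 1]]
    return [(b - a) * 1000.0 / fps for a, b in zip(minima, minima[1:])]

# Toy GAW sampled at 4,000 frames/s: three identical 8-frame cycles,
# i.e., a perfectly periodic 500-Hz vibration.
cycle = [0, 1, 3, 5, 6, 5, 3, 1]
gaw = cycle * 3 + [0, 1]
print(cycle_periods(gaw, 4000))  # [2.0, 2.0]
```

A real analysis pipeline would first segment the glottis in each video frame to obtain the waveform, and would report jitter (period-to-period variation) rather than raw durations.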
ROBERT E. HILLMAN, PHD, MGH Institute of Health Professions & Center for Laryngeal Surgery and Voice Rehabilitation, Boston
3. Smarter Phones for Hearing Impairment
Wireless devices keep getting smaller, with more powerful processors and operating systems. As the technology improves, smartphone manufacturers are moving beyond simple hearing-aid compatibility. They're designing phones that interact better with hearing aids and cochlear implants, can warn users with hearing impairment of nearby events and even exchange sound profiles with other smartphone users.
Since 2003, the U.S. Federal Communications Commission has required wireless handset manufacturers to ensure their products avoid noise and feedback when used with hearing aids. But the 21st Century Communications and Video Accessibility Act of 2010 upped the ante, amending the law to give people with disabilities greater access to modern devices. The resulting surge of new features and technologies hasn't yet subsided.
Apple, Inc., plans numerous innovations for its customers with hearing impairment. iPhones may soon be able to detect and warn a user with hearing loss of sounds in the user's immediate surroundings—such as a doorbell or a smoke detector's alarm—with a visible cue on the iPhone's screen. According to patents filed in 2011, Apple plans to create phones that can remotely adjust a hearing aid's settings—and even transmit those settings to a friend's iPhone—to achieve optimal audibility in noisy locations.
And the iPhone's newest operating system, iOS 6, is designed to work with "Made for iPhone" hearing aids—aids specifically designed for direct, energy-efficient, wireless connections. Danish hearing aid and headset manufacturer GN Store Nord has already signed on to create these branded devices, and others will soon follow.
Motorola's DROID phone includes features to improve accessibility, allowing users to select vibrations, tones, or even a speech synthesizer to accompany navigation of device menus and buttons. Other manufacturers—including Samsung, Kyocera, and Clarity—have produced phones with high compatibility ratings for hearing-aid users, and the demand for such phones continues to swell.
Regardless of brand, the FCC and manufacturers alike recommend that users always "try before you buy" to ensure the best match between hearing aids or cochlear implants and a smartphone.
MATTHEW CUTTER, The ASHA Leader
4. Biofeedback for Acquired Apraxia of Speech
Will biofeedback help people with acquired apraxia of speech improve their motor planning abilities? In this advancing area of technology, electropalatography (EPG) appears to be a promising treatment tool for people with this disorder.
EPG is a tool that displays and records tongue contact with the palate during speech production. The speaker wears a pseudopalate, a custom-made acrylic plate—like a dental retainer—embedded with electrodes. The pseudopalate fits tightly against the upper palate, and sends electronic signals of tongue-to-palate contacts to an external processor and then to a laptop or desktop computer. With advances in technology, the goal is for the system to become wireless.
Using EPG, a clinician can display, record and store the timing and location of tongue-to-palate contacts, and also provide visual biofeedback to the speaker. With a split screen, the system can display the contact patterns of two people—the patient and clinician—and the patient can try to emulate the clinician's patterns for treatment targets.
With no limit to the length of recorded speech production, treatment targets can vary in length—words, phrases or sentences—based on a patient's treatment goals. The clinician or patient can review recorded productions in real time, slow motion or stop motion, and can print out views of tongue-to-palate contact or save them in a file.
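As a rough illustration of how such a system might quantify how closely a patient's contact pattern matches the clinician's target, consider the toy sketch below. It is hypothetical: real pseudopalates use different electrode layouts (commonly 62 electrodes), and actual EPG software is far more sophisticated. Here each frame is simplified to an 8-by-8 grid of on/off electrode contacts.

```python
# Hypothetical sketch: scoring the agreement between a patient's and
# a clinician's EPG tongue-palate contact frames.

def contact_overlap(patient_frame, clinician_frame):
    """Percentage of electrodes whose contact state (on/off) matches."""
    total = 0
    matches = 0
    for p_row, c_row in zip(patient_frame, clinician_frame):
        for p, c in zip(p_row, c_row):
            total += 1
            matches += (p == c)
    return 100.0 * matches / total

# Toy frames: 1 = tongue contacts the palate at that electrode.
patient = [[0] * 8 for _ in range(8)]
clinician = [[0] * 8 for _ in range(8)]
clinician[0] = [1] * 8           # full alveolar contact, e.g., for /t/
patient[0] = [1] * 4 + [0] * 4   # partial contact on the same row

print(contact_overlap(patient, clinician))  # 93.75
```

In a split-screen treatment session, a running score like this could give the patient a single number to push toward 100 while attempting the clinician's pattern.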
EPG has been used to treat a number of speech disorders—such as those associated with cleft palate, hearing impairment and developmental articulatory errors—but only a few case studies have used EPG to treat people with acquired apraxia of speech. Its potential to give people with the disorder additional information about articulation, however, makes it a promising treatment, because it provides visual and tactile feedback for therapy targets, especially for speech sounds that are less visible.
SHANNON C. MAUSZYCKI, PHD, CCC-SLP, VA Salt Lake City Health Care System; Department of Communication Sciences and Disorders, University of Utah
5. Jolts of Hearing to the Brain Stem
Cochlear implants have revolutionized treatment for deafness, providing functional hearing to many. But not all. Electrically stimulating the cochlea does nothing for people who have no auditory nerve.
Enter the auditory brain stem implant, similar to a cochlear implant except that the electrode sits on the first auditory relay station in the brain stem, the cochlear nucleus. The ABI's 21 tiny electrode contacts yield distinct pitch sensations, allowing recipients to detect and discriminate sounds. This can improve face-to-face communication by up to 30 percent.
In the United States, ABI is approved for use among adults who have lost the auditory nerve to neurofibromatosis type 2—a tumor-causing genetic defect requiring surgery that severs the nerve. Even with ABIs, many of these patients still cannot identify words or sentences, though research results are mixed on this point. A 2007 study by Behr and colleagues found that some patients in this group recognize speech just as adeptly as cochlear implant patients.
But in Europe, where the ABI is applied more broadly, studies indicate it appears significantly more helpful to patients who lost their auditory nerve for other reasons, including temporal bone fracture, congenital aplasia of the cochlea and/or nerve, and severe ossification from congenital or post-meningitic growth. About half of these patients have sentence understanding of more than 50 percent due to the ABI alone, without speechreading. Some of them understand speech as well as the best-performing cochlear implant patients.
In Europe, ABIs are used in children with congenital malformations of the cochlea and/or the absence of an auditory nerve. Many of these children show auditory and speech development comparable to that of children with cochlear implants.
Overall, excellent speech understanding is possible with an ABI. Poorer performance in neurofibromatosis type 2 patients may stem from damage to the cochlear nucleus or, in other patients, trauma to the auditory nerve. New studies are probing the causes of performance differences to pinpoint who will most likely reap the biggest ABI benefit.
ROBERT V. SHANNON, PHD, House Research Institute, Los Angeles
6. Direct Current Stimulation to Treat Aphasia
Given the potentially devastating effects of aphasia on speech, it's no surprise that two up-and-coming brain stimulation techniques have garnered much attention for treating the disorder. One, transcranial magnetic stimulation, uses a magnetic field to deliver stimulation that excites neurons (see the next section). The other, transcranial direct current stimulation, does not directly induce neural firing but appears to have either excitatory or inhibitory effects on localized brain function.
tDCS induces a low electrical current between two electrodes placed on the scalp: an anodal electrode, which is thought to stimulate underlying neural tissue, and a cathodal electrode, which is thought to inhibit neuronal activity. In a pair of studies, our research group showed that anodal stimulation targeting left brain regions that remain intact after left hemisphere stroke can potentially enhance the effect of behavioral aphasia therapy. In these studies, a total of 18 patients received training on computerized picture-word matching during either anodal stimulation treatment or sham stimulation.
In both studies, the treatment group performed better than the sham group: Patients in the treatment group were able to name more items or were faster at naming after one week of behavioral treatment coupled with anodal stimulation.
Although the effects of tDCS are not as immediately apparent as with transcranial magnetic stimulation, tDCS is particularly attractive because it has minimal side effects and has never been proven to cause seizures—a greater concern with transcranial magnetic stimulation.
However, effects of tDCS are far more diffuse than those of transcranial magnetic stimulation, making it difficult to precisely target areas of the cortex. It appears that transcranial magnetic stimulation directly improves language processing, whereas tDCS is probably not effective unless it is coupled with behavioral aphasia treatment. That is, tDCS should be viewed as something that enhances, rather than replaces, aphasia therapy. As is the case with transcranial magnetic stimulation, more work is needed to verify the usefulness of tDCS and to determine optimal dosages to administer and brain regions to target. Our group continues to investigate these questions.
JULIUS FRIDRIKSSON, PHD, CCC-SLP, Department of Communication Sciences and Disorders, Medical University of South Carolina
7. An Electromagnet With a Speech Benefit
Transcranial magnetic stimulation is being used to treat everything from depression and headaches to communication disorders, including aphasia, articulation problems in Parkinson's disease and tinnitus. During TMS treatment, the clinician places a coil—the electromagnet—on the patient's head, over targeted brain regions. The coil generates a magnetic field that induces electrical current in the brain, limited to the cortical surface. This, in turn, stimulates neurons.
Repetitive TMS, in which the magnetic field is pulsed on and off over a period of time, has shown particular promise with communication disorders. However, it does not work for everyone, and its treatment effectiveness varies based on factors such as idiosyncratic variation in neural organization and how the stimulation is delivered—for example, whether the coil is tilted or lying flat on the head.
In research on aphasia, some studies suggest that for repetitive TMS to be most effective, the target brain region should be determined case by case. This determination can be done via functional magnetic resonance imaging while the person performs a speech-language task. In speech decline related to Parkinson's, research results for TMS treatment are more mixed: Although one study found no treatment benefits, others found that it improved various aspects of speech production, such as speech intelligibility and intensity.
Researchers also continue to investigate which brain area can best counteract tinnitus when stimulated. One school of thought holds that the best approach is applying low-frequency repetitive TMS to the parietal cortex because it keeps people from selectively attending to their tinnitus.
Through future studies, researchers hope to better understand the characteristics of people who respond favorably to TMS versus those who don't. They also want to determine whether combining TMS with behavioral treatments is more effective than either approach alone. And they aim to determine the duration of TMS effects. Although improvements in some people's speech and language have been shown to last for months, and even years, it is still not clear whether some clients may need "booster" sessions.
BRIDGET MURRAY LAW, The ASHA Leader, in consultation with
LINDA I. SHUSTER, PHD, CCC-SLP, Department of Speech Pathology and Audiology, Center for Advanced Imaging, West Virginia University
8. Implant Cure for Tinnitus?
A person suffering from debilitating tinnitus decides it's time to repeat the process that provided past relief for the ringing in her ears. She activates her permanent biomechanical implant, which stimulates the vagus nerve, while she listens to a sequence of tones in specific frequencies. With a steady regimen, the overexcited neurons in her auditory cortex return to a normal, resting state. Her tinnitus relieved, she shuts down the implant and returns to work.
What sounds like a scene from a science fiction movie may soon be reality, thanks to the efforts of a research team based at the Kilgard Brain Plasticity Laboratory at the University of Texas at Dallas. In 2011, a team led by Michael Kilgard and Robert Rennaker successfully reduced tinnitus symptoms in rats. They stimulated the rats' vagus nerves using a novel device while playing audible tones different from the annoying tinnitus tone. Over time, the rats' brains were trained to ignore the tinnitus tone, returning hearing to normal.
Now Kilgard has moved to the next stage: humans. Armed with the fully implantable Serenity™ device—manufactured by MicroTransponder, Inc., a Dallas-based medical device company—Kilgard has completed a 10-patient tinnitus clinical trial in Belgium. Further studies in the European Union and United States are planned for 2013.
Kilgard recently presented the results of the Belgium study at the Tinnitus Research Initiative conference. "Many of the patients showed dramatic and long-lasting improvements in their tinnitus after vagus nerve stimulation was paired with tones," he said.
Perhaps surprising is the involvement of the vagus nerve (the so-called "wandering nerve"), which originates in the brain stem and stretches as far as the abdomen, with a more extensive distribution than any other cranial nerve. The vagus nerve is truly multi-talented, handling numerous involuntary body functions, including heart rate, digestion and reflex responses. In its wanderings through the body, the vagus nerve contacts the heart, lungs, larynx, stomach, intestines—and the ears. As Kilgard explained, "Stimulation of the vagus nerve sends a signal to the brain that tells neurons to pay attention because something interesting is happening."
According to the American Tinnitus Association, approximately 2 million Americans suffer from severe tinnitus. If Kilgard and MicroTransponder, Inc., are successful, relief may be only a bio-implant away.
MATTHEW CUTTER, The ASHA Leader
9. Revealing the Brain's Language Pathways
Neuroimaging advances have shed new light on the seat of language in the brain. Magnetic resonance imaging has always been an important tool for evaluating the cortical areas affected by injury. By looking at a patient's scan, we can assess the extent and the location of the stroke, tumor or degenerative process and make more accurate predictions as to the patient's prognosis. Now, we are using this technology to visualize the fiber pathways that connect these areas and enable them to communicate with each other.
My colleague And Turken and I have been using diffusion MRI to explore these pathways that help support speech and language. Until recently, it was thought that the arcuate fasciculus was the only important fiber bundle for language, but now we know there are many other equally critical pathways. For instance, a fiber tract known as the inferior occipital-frontal fasciculus appears to play a key role in language comprehension, especially for sentences. Like the arcuate fasciculus, this tract connects the temporal and frontal lobes of the brain, but this fiber pathway runs down instead of up through the temporal lobe, connecting other temporal lobe areas along the way.
This information means if we see a patient with a lesion running deep into the fibers of the temporal lobe, we know that patient might have a long-term comprehension deficit requiring additional treatment. Similarly, a patient with injury to the superior longitudinal fasciculus will have difficulty passing linguistic information forward to motor speech mechanisms in the frontal lobe. These patients present with severe production deficits that render them unable to convey even basic needs.
Diffusion MRI allows us to see the full extent of the injury to these important fiber pathways and learn the importance of these fibers in supporting speech and language. We can use this information to help clinicians predict recovery in brain-injured patients and help guide patient-management decisions.
NINA F. DRONKERS, PHD, Center for Aphasia & Related Disorders, VA Northern California Health Care System, University of California at Davis
10. Giving Synthetic Voices a Personal Touch
Talking machines are everywhere, it seems: cars, appliances, kiosks, mobile phones, call centers. But they still lack the naturalness and individual identity of the human voice.
This lack of individual vocal identity is particularly noticeable in augmentative and alternative communication (AAC) devices, which are meant to be an extension of the user. For example, it's not uncommon for several people in a classroom or office setting to use the same synthetic voice, or even for a child to use the same voice his or her entire life. This lack of personalization can hamper people's use of some technologies and even their social integration.
The VocaliD project (for Vocal Identity), a collaboration between the Communication Analysis and Design Laboratory at Northeastern University and the Speech Research Laboratory at the Nemours Alfred I. duPont Hospital for Children, funded by the National Science Foundation, is developing personalized synthetic voices for users with profoundly impoverished speech motor control. Current techniques, such as voice banking (recording one's natural voice in anticipation of losing it) and voice conversion (transforming speech from a source speaker to sound like a target speaker), rely on a large body of intelligible speech from the target speaker, so they're not feasible for this application. Based on previous research findings that even people with severe speech motor impairments can vary prosodic features, VocaliD leverages pitch, loudness, rate and vocal quality—features that are preserved in the speaker's residual vocalizations and are ideal for personalization.
In our approach, a client's identity-bearing cues are fused with a surrogate talker's to generate a personalized, yet intelligible synthetic voice. The VocaliD process requires only a short vowel-like production from the client and an inventory of intelligible speech from the surrogate speaker. Listener studies have verified that the personalized voices are highly intelligible and convey the user's vocal identity. We are now looking into filtering out subtle elements of the surrogate talker's vocal identity that interfere with the client's vocal identity. Our ultimate goal is to afford users of synthetic speech the same ownership and individuality as the natural voice.
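To give a flavor of the fusion idea, here is a toy sketch of re-anchoring a single prosodic feature—pitch—from a surrogate talker to a client's vocal range. This is purely illustrative, not VocaliD's actual algorithm: the real system works with full spectral models, and the pitch values below are invented. The point is that the surrogate's contour shape is preserved while the client's pitch level is adopted.

```python
# Hypothetical sketch of prosody "fusion": scaling a surrogate talker's
# pitch contour so its median matches the client's median pitch.

def fuse_pitch(surrogate_f0, client_median_f0):
    """Scale a surrogate pitch contour (Hz) to the client's median
    pitch, preserving the contour's relative movement."""
    ordered = sorted(surrogate_f0)
    surrogate_median = ordered[len(ordered) // 2]
    scale = client_median_f0 / surrogate_median
    return [f0 * scale for f0 in surrogate_f0]

# The surrogate speaks around 200 Hz; the client's residual
# vocalizations center on 120 Hz.
surrogate_contour = [180.0, 200.0, 220.0, 200.0, 190.0]
fused = fuse_pitch(surrogate_contour, 120.0)
print([round(f, 1) for f in fused])  # [108.0, 120.0, 132.0, 120.0, 114.0]
```

The rises and falls of the surrogate's contour survive intact; only the overall pitch level has moved into the client's range—one small ingredient of what a personalized voice requires.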
RUPAL PATEL, PHD, CCC-SLP, associate professor, Department of Speech Language Pathology and Audiology, Northeastern University, Boston
11. A New Way to Look at Traumatic Brain Injury
Imaging offers powerful insights into the damage done by traumatic brain injury, and now a new advance called susceptibility weighted imaging (SWI) offers an especially detailed picture. Commonly used imaging techniques such as magnetic resonance imaging are often unable to show damage to the brain's network of blood vessels in mild traumatic brain injury. Damage to vessels that leads to microbleeds can be a hallmark of diffuse axonal injury, a particularly devastating form of brain injury in which large areas of white matter are affected. But damage can also occur locally, as recent advances in SWI have shown.
SWI picks up on cerebral microbleeding, which is an indication of damage to both axons and small blood vessels, since these two are intertwined. The problem could present as bleeding, severe reductions in deoxyhemoglobin, impaired neural vascular tone, or reduced blood flow and blood volume.
The brain's veins are more vulnerable than its arteries to the same force of impact, because arteries, though smaller, are stronger. Injury to the most vulnerable veins on the cortical surface can cause cascading damage to those draining into them. A blow to the brain can also trigger shearing at the junction of smaller and larger veins deeper in the white matter.
SWI can identify all these types of damage: hemorrhaging, thrombosis and loss of local oxygen saturation in the medullary and pial veins in the brain. This type of venous damage may be seen in mild traumatic brain injury and may explain why patients whose brains appear unaffected on other types of imaging demonstrate neurocognitive problems. These patients may benefit from SWI to determine whether they have sustained vascular or tissue damage and to support a diagnosis that, in many cases of mild traumatic brain injury, might otherwise be questioned.
E. M. HAACKE, PHD, Department of Radiology, Wayne State University