January 1, 2013 Features

Unchained Mind

The voice coming over the computer speakers sounded robotic. It was clearly the oddly paced output of a speech synthesizer, rather than a human voice. But there was something very special about this computerized voice.

These were the first speech sounds uttered by "Brian" after nine long years of silence due to the profound paralysis of locked-in syndrome. Patients with this cruel neurological condition lose voluntary movement completely but retain all sensation, consciousness and cognition. 

In Brian's case, a stroke occurred in his brain stem during surgery after a severe car crash and left him in this state. His verbal liberation was made possible by a brain-computer interface that nearly instantaneously translated his brain's electrical signals into an auditory signal approximating speech sounds. With a bit of practice trying to produce sounds while listening to his new computerized voice, he was able to control the synthesizer well enough to effectively produce vowels in a controlled laboratory experiment we conducted in partnership with Philip Kennedy of Neural Signals, Inc., in Duluth, Ga., a company Kennedy founded to develop implantable electrodes for BCIs.

Locked-in syndrome arises when a brain stem stroke or neurological disorder destroys neural pathways that carry voluntary movement commands from the cerebral cortex to the muscles. The syndrome's very name is terrifying, forcing one to consider being fully conscious but trapped in a completely immobile body with no means of communicating with the outside world. 

Some people with the syndrome maintain enough voluntary eye movement to use an eye tracker, a device that measures their eye movements with a camera, to control a computer with augmentative and alternative communication capabilities. Others can painstakingly transmit messages via a communication partner who reads out letters sequentially from a letter board until the patient blinks to indicate the appropriate letter, thereby slowly spelling out a message. Jean-Dominique Bauby, a former editor-in-chief of Elle magazine who became locked in after a stroke, wrote an entire book this way. In the most severe cases, patients do not have even voluntary eye movement control, leaving them completely unable to communicate. Now, a new generation of brain-computer interface technology holds the promise of allowing even these patients to communicate once again, perhaps even speaking through a computer as Brian did. 

Invasive versus non-invasive 

Until recent years, primitive devices like the letter board were the only effective communication tools for many locked-in patients. The development of BCI technology is beginning to change that, and much more dramatic changes are on the horizon. In a BCI, electrical signals from the brain are routed to a computer, where they are translated into commands by an individualized decoder. Software on the BCI computer can allow the user to type messages, produce synthetic speech over a computer speaker or control external devices like robotic arms. 
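The decoding step described above can be sketched in a few lines. This is a minimal illustration, not any specific clinical system: it assumes the common case of a linear decoder, in which per-channel neural activity is weighted and summed to produce a command such as a two-dimensional cursor velocity. The weight values and channel counts here are hypothetical.

```python
# Sketch of a linear BCI decoder: each output dimension (e.g., cursor
# velocity in x and y) is a weighted sum of per-channel neural activity.
# The individualized weight matrix is what calibration learns for each user.

def decode(features, weights):
    """features: per-channel activity (e.g., firing rates);
       weights: one row of channel weights per output dimension."""
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

# Two output dimensions (vx, vy) decoded from four channels.
W = [[0.5, -0.25, 0.0, 0.125],
     [0.0, 0.25, -0.5, 0.25]]
rates = [10.0, 4.0, 2.0, 8.0]
print(decode(rates, W))  # -> [5.0, 2.0]
```

Real decoders are calibrated per user and per session, but the core operation, mapping many noisy channels onto a few control dimensions, is the same.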

BCIs can be divided into two main classes based on the type of brain signal measured, which in turn depends on the type of electrode used: Non-invasive BCIs use electrodes temporarily attached to the scalp, a technique referred to as electroencephalography, or EEG, whereas invasive BCIs use electrodes placed on the surface of the brain (electrocorticography, or ECoG) or inserted directly into brain tissue (microelectrode arrays). 

Non-invasive BCIs rely on electrical signals created by tens of thousands of synchronously firing neurons whose electrical outputs sum together. Only these large, aggregated signals are strong enough to travel through the skull to reach the EEG electrodes. The poor electrical properties of the skull degrade this signal, effectively smearing together signals from different parts of the brain. As a result, EEG-based BCIs can only crudely detect what's going on in the user's brain. The most successful systems use robust visual evoked potentials, large peaks in the signal that occur during particular types of visual stimulation. 

One such potential is the P300, so named because it is a positive electrical potential occurring approximately 300 milliseconds after a visual target appears in the visual field. In a P300 Speller, for example, the keys of an on-screen keyboard flash on and off in a semi-random pattern, with each key following a different on-off pattern. The user views this display while focusing on the key he or she would like to choose. Each time the target key appears, a P300 wave is generated. Based on the pattern of P300 waves emanating from the user's brain, the BCI determines which key the user wants to choose. BCI2000, an EEG-based system developed by researchers at the Wadsworth Center in Albany, N.Y., has been successfully installed in a handful of homes of locked-in patients, who use it primarily for communication via a P300 Speller (for more on the P300, see the ASHA Leader Online exclusive at www.asha.org/leader).
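The selection logic just described can be illustrated with a short sketch. This is a hypothetical simplification, not the BCI2000 implementation: it assumes the EEG classifier has already flagged which flash rounds produced a P300, and it simply picks the key whose flash schedule best lines up with those detections.

```python
# Hypothetical P300 Speller target selection: each key flashes on its own
# schedule, and the key whose flashes best match the detected P300 waves
# is chosen as the user's intended key.

def select_key(flash_schedules, p300_detected):
    """flash_schedules: {key: flash state (1/0) per round};
       p300_detected:   P300 present (1/0) per round, from the classifier."""
    scores = {}
    for key, schedule in flash_schedules.items():
        # Count rounds where the key's flash state agrees with the P300 evidence.
        scores[key] = sum(f == p for f, p in zip(schedule, p300_detected))
    return max(scores, key=scores.get)

schedules = {
    "A": [1, 0, 0, 1, 0, 1],
    "B": [0, 1, 0, 1, 1, 0],
    "C": [0, 0, 1, 0, 1, 1],
}
# Suppose the classifier flagged P300s in rounds 1, 4 and 6 (user gazing at "A").
detections = [1, 0, 0, 1, 0, 1]
print(select_key(schedules, detections))  # -> A
```

In practice many flash rounds are averaged to overcome noisy single-trial detections, which is one reason each key choice takes several seconds.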

Although impressive and life-changing for some users, non-invasive BCIs suffer from a serious limitation: It takes a long time to make each "control decision." For example, it can take five to 10 seconds for each key choice in a P300 Speller, rendering natural face-to-face conversations impossible due to the slow typing rate. Electrical signals measured at the scalp simply do not provide enough information about brain function to allow the fast control decisions needed to produce speech at a near-conversational rate or to control robotic limbs. The only way to overcome this limitation is to measure brain signals with electrodes that are inside the skull, closer to the neurons generating the detailed electrical patterns that represent our thoughts.

The invasive electrode technology that most directly resembles EEG is ECoG, which involves grids of electrodes placed directly on the surface of the brain. ECoG is most commonly used as a mapping technology for characterizing seizure patterns in epilepsy patients prior to neurosurgery. Because the negative electrical effects of the skull are avoided, ECoG electrodes receive a much finer-grained pattern of electrical signals than EEG electrodes, and "grids" involving 96 electrodes or more are now used at a number of hospitals throughout the country. ECoG use in BCIs is a relatively recent phenomenon, and current ECoG-based BCIs are limited to temporary use by epilepsy patients as the electrode arrays are removed prior to corrective neurosurgery. Nonetheless, impressive ECoG-based control of computer cursors and P300 Spellers has been demonstrated, and very recent studies have also shown that ECoG can be used to identify speech sounds during imagined speech, underscoring the promise of this technology for restoring speech and other capacities to individuals with locked-in syndrome.

Electrodes implanted directly in the brain tissue can collect still finer-grained information. Microelectrode arrays implanted in the cerebral cortex have produced the most impressive results. Because their recording tips are nestled very near cell bodies in the cerebral cortex, microelectrode arrays are capable of recording electrical spikes from individual neurons, providing details of neural processing not accessible with even the most advanced ECoG arrays.

Most microelectrode arrays for BCI use are implanted in the motor cortex, the region concerned with controlling body movements. Numerous invasive BCI studies have shown that the motor cortex remains functional in locked-in patients even after years without voluntary movement capabilities. Critically, the motor cortex also remains plastic; in particular, it remains capable of learning to control new movements such as those of a computer cursor, robotic arm or speech synthesizer. These capabilities are spared because the damaged neural pathways in locked-in syndrome are downstream of the motor cortex, preventing motor cortical movement commands from reaching the muscles but sparing the parts of the brain responsible for cognition and action planning. 

Since the advent of microelectrode array-based BCIs in the 1990s, most studies have involved the control of a computer cursor or a real or virtual (computer-simulated) robotic arm or hand. Microelectrode arrays in these studies are typically implanted into the hand/arm region of the motor cortex, the region that normally controls these types of movement. In a state-of-the-art microelectrode array-based BCI, a user can learn to control movement of a computer cursor accurately with just minutes of practice. In a recent demonstration from the BrainGate clinical trial, headed by Leigh Hochberg and John Donoghue of Brown University and published in the journal Nature in 2012, a woman with locked-in syndrome was able to pick up and drink from a bottle using a complex robotic arm/hand she controlled using a microelectrode array-based BCI. 

Restoring speech

Four years have passed since the real-time speech BCI study described in the opening paragraph, which was part of a clinical trial headed by Philip Kennedy and published in the online journal PLOS ONE in 2009 (www.plosone.org). The study involved a two-channel neurotrophic electrode that had been surgically implanted four years before the study into a motor cortical area associated with speech, the first and only speech-motor cortex implantation for BCI. BCI technology for human implantation has improved rapidly since then. As mentioned earlier, state-of-the-art BCIs now use 96 or more neural information channels, which in turn leads to much better control by the user. 

Speech synthesizer technology has also improved. We now have a synthesizer capable of producing clearly intelligible consonants and vowels with a simple two-dimensional control scheme that is as easy to control as the vowel-only synthesizer from our prior study. The synthesizer is designed to work with microelectrode arrays implanted in either the hand or speech area of the motor cortex. The latter is preferable from a "naturalness of control" standpoint because, put simply, the speech motor cortex is more specialized for controlling speech movements. But no current BCI clinical trials are implanting in the speech motor cortex; instead, they implant microelectrode arrays in the hand/arm area. To accommodate these systems, the synthesizer provides a visual interface that allows speech to be generated by controlling movements of a computer cursor.
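One plausible way to realize a two-dimensional speech control scheme, sketched below purely for illustration (the article does not specify the mapping), is to tie the decoded cursor position to the first two formant frequencies, F1 and F2, the resonances of the vocal tract that largely distinguish one vowel from another. The frequency ranges used here are rough, illustrative assumptions.

```python
# Hypothetical two-dimensional control scheme for a speech synthesizer:
# a normalized (x, y) cursor position is mapped linearly onto the first
# two formant frequencies. Ranges below are illustrative approximations
# of typical adult vowel formants.

F1_RANGE = (250.0, 850.0)   # Hz; F1 rises as the vowel becomes more open
F2_RANGE = (700.0, 2500.0)  # Hz; F2 rises as the vowel becomes more front

def cursor_to_formants(x, y):
    """Map cursor coordinates in [0, 1] x [0, 1] to (F1, F2) in Hz."""
    x = min(max(x, 0.0), 1.0)   # clamp out-of-range cursor positions
    y = min(max(y, 0.0), 1.0)
    f1 = F1_RANGE[0] + y * (F1_RANGE[1] - F1_RANGE[0])
    f2 = F2_RANGE[0] + x * (F2_RANGE[1] - F2_RANGE[0])
    return f1, f2

# Corners of the control space land near canonical vowels:
print(cursor_to_formants(1.0, 0.0))  # -> (250.0, 2500.0): low F1, high F2, roughly "ee"
print(cursor_to_formants(0.0, 1.0))  # -> (850.0, 700.0): high F1, low F2, roughly "ah"
```

A continuous mapping like this lets the user steer smoothly between sounds, which is what makes near-real-time auditory feedback, and hence practice-driven learning, possible.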

What this all means is that the technologies needed for restoring near-conversational speech capabilities to people with locked-in syndrome via a BCI are now in place. The primary hurdles remaining for getting these devices into the homes of patients are U.S. Food and Drug Administration approval and clinical testing. 

Science (non)fiction future

A number of research teams, including our own, are conducting multi-array BCI studies involving simultaneous implantation in several different cortical areas. Imagine, for example, that one microelectrode array is implanted in the area of the motor cortex associated with controlling a robotic arm, while another is placed in the speech area for controlling a speech synthesizer. A third microelectrode array might be implanted in the leg area for control of a wheelchair. In fact, entire exoskeletons for BCI use are already under construction in several labs across the world. The scenario of being given a new robotic body and voice—imagine a nonlethal version of the reconstructed police officer in "Robocop"—is on the verge of moving from science fiction to reality. Needless to say, this development will be world-changing for these patients.

The social implications of BCI technology, however, extend far beyond rehabilitation. Although it is hard to imagine a nefarious use for a speech BCI, it is not hard to imagine the use of robotic BCI technologies as weapons. We've seen it depicted in a number of sci-fi books and movies, after all. 

Another issue concerns the potential for human augmentation, or able-bodied people electing to receive BCI implants. Some futurists are already salivating at the prospect of being implanted with electrodes to obtain super-human powers via BCIs. These powers could extend well beyond robotic body parts. Current research into memory-related BCIs could lead to the possibility of enhancing one's memory capacity with microchips connected into the brain's circuitry, for example. 

Ethicists are already beginning to tackle these troubling social and moral issues, and much more discussion is certain to come as BCIs start rolling out of labs and into public life. For now, however, one thing seems certain: In the future, people with severe paralysis will achieve far higher levels of function than many of us ever thought possible, including speech restoration for patients like Brian, thanks to BCI technology.

Frank H. Guenther, PhD, is professor of speech, language, and hearing sciences and biomedical engineering at Boston University. guenther@bu.edu

Jonathan S. Brumberg, PhD, is assistant professor of speech-language-hearing at the University of Kansas. brumberg@ku.edu

cite as: Guenther, F. H.  & Brumberg, J. S. (2013, January 01). Unchained Mind. The ASHA Leader.


Guenther, F.H., Brumberg, J.S., Wright, E.J., Nieto-Castanon, A., Tourville, J.A., Panko, M., Law, R....  Kennedy, P.R. (2009). A wireless brain-machine interface for real-time speech synthesis. PLOS ONE, 4(12), e8218.

Hochberg, L., Bacher, D., Jarosiewicz, B., Masse, N., Simeral, J. D., Vogel, J., Haddadin, S.... Donoghue, J.P. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398), 372–375.

Wolpaw, J., McFarland, D., Vaughan, T., & Schalk, G. (2003). The Wadsworth Center brain-computer interface (BCI) research and development program. IEEE Transactions on Neural Systems & Rehabilitation Engineering, 11, 204–207.

