As a person with amyotrophic lateral sclerosis (ALS), I have lost the ability to speak, and have benefited from the care of excellent speech-language pathologists (SLPs) and the technology that is now available to give a voice to those who cannot speak. I hope that sharing my thoughts may give SLPs new strategies to help others with speech impairment and ALS.
The diagnosis of ALS came after I had conquered type 2 diabetes. After learning that I had diabetes in fall 2001, I was determined to lose weight, get fit, and beat the diabetes into submission. I successfully lost about 75 pounds and completed three marathons in the next two years, culminating with the Chicago Marathon in 2003.
One tool runners use to govern their training pace is to make sure that they can comfortably converse while running. In the spring of 2004, while training for my next marathon, I noticed that my speech while running was a bit garbled. I found this curious, but nothing more. By summer, the change had become pronounced and I had difficulty swallowing. I saw a neurologist, and after exhausting all other possibilities, he diagnosed my illness as ALS, or Lou Gehrig's disease.
ALS is a degenerative disease in which the motor neurons fail. In most cases, the motor neurons affecting limb muscles are the first to go, resulting in weakness, atrophy, and paralysis of the arms and legs. Eventually the muscles involving speech, swallowing, and respiration follow, and death results from respiratory failure. Most ALS patients do not have cognitive deficits.
In a smaller number of cases (about 25%), the muscles involved in speech and swallowing go first and limb paralysis happens later. This is known as bulbar-onset ALS, the form of ALS that I have.
My sister Gail Venable, an SLP, gave me an article on velocity of speech (Beukelman, Ball, & Pattee, 2004) as a measure of when to consult an augmentative and alternative communication (AAC) specialist. I began to test myself, and when my speech rate dropped below 125 words per minute, I called SLP Melanie Fried-Oken, an expert in AAC. We conducted an assessment and discussed strategies that I could use to help me be understood and technologies to deploy when those strategies failed.
Roles of the SLP
Fried-Oken was a great help in two unexpected but vital areas. The first was helping me to understand the social implications of my inability to communicate. The second was identifying funding sources for equipment.
My model of an AAC specialist has five key roles:
- Assessor of communication needs
- Developer of communication strategies
- Counselor on the social implications of speech loss
- Technology consultant
- Reimbursement expert
I cannot stress strongly enough the importance of all five of those roles. They made the difference in equipping me with the tools I needed to cope with my onrushing disability. It is my hope that all current and future clinicians keep all five roles in mind. It is important not to leave gaps for patients to fall through and to keep other providers in the loop to complete the circle of care.
Paper and pen is the first and most obvious strategy, yet the one I initially overlooked: the simple, elegant, and universal notepad. Pen and paper is not a natural conversational medium, however, and it doesn't work in low-light situations, with groups, across a room, or on the telephone. As I lose the ability to use my hands, this avenue will be closed to me.
Telephone communication is a form of communication that we take for granted. Without a technological solution, it is inaccessible to those who cannot speak. There are telecommunications equipment distribution programs and relay services in all states that provide assistive technology for people to access the telephone system. Some programs focus on people with hearing loss, but others also provide services for people with speech impairment. I use my voice synthesis system directly with a speakerphone or with a cell phone with a Bluetooth headset. The telephone is perhaps my greatest source of frustration. Some conversations go very smoothly, but that is the rare exception. I often call busy doctors' offices, and it is frustrating when they hang up because I am too slow to talk. For outgoing calls, I have developed a way to require the answering party to respond, so they cannot hang up:
Answerer: "Customer service, may I help you?"
Me: "Are you the person who can help me with finding out the status of my order?"
Answerer: "Yes, sir. What is the purchase order number?"
Me: "I use a text-to-speech system to talk, so please be patient while I type. Can you understand me OK?"
The key is that the answering party has been forced to commit to the conversation at least long enough to realize that I am not a solicitor or crank caller. Incoming calls are more difficult; I use a pen-sized voice recorder to tell the caller that I am on the line and am setting up my system to talk with them.
One-to-many communication takes two forms, prepared and conversational. For prepared speeches, the key is being able to start and stop the speech on demand. EZ Keys™ XP works quite well for prepared addresses. I can control the output sentence by sentence, and can interject comments by typing. For speeches with PowerPoint slides, I use TextAloud, which has the advantage of allowing me to pause and resume while my PowerPoint slides are on the screen.
One-to-one conversations are the easiest form of communication and have the advantage of total commitment from both partners in the conversation. The challenge comes when there is strong emotional context to the conversation. The speech-generating system always has the same tone of voice with the same, sometimes slightly peculiar, intonation. I am learning to use facial expressions and gestures to help with communication and, as much as possible, to maintain some eye contact while typing.
Physical positioning also is an issue when conversing. I feel uncomfortable when someone sits next to me or behind me and reads my words on the screen. If I have privacy, I can start typing ahead while my partner is still speaking. This restores a nearer-to-normal rate of conversation, and if my response turns out to be out of context or inappropriate, I can delete or correct the message. If they read as I type, it takes away my ability to get ahead, slows down the conversation, and at worst, catches me saying something I don't want to say. Fried-Oken suggested sitting at a 45-degree angle with my partner, where possible, so the PC screen is not a wall between us. I also have tried to lower the screen so that it is less of a barrier.
Group conversations are a great challenge because communication is fast and unstructured, and it is very difficult to participate when my pace is so much slower than the group's. For the most part I tend not to speak in groups unless attention naturally focuses on me or I am asked a question. In business meetings, this difficulty in participating can be a significant handicap.

Noisy environments also present a challenge. It's surprising to me how loudly people speak in restaurants, bars, and even retail stores. Even at full volume, my external speakers (JBL On Tour, designed for music) are not loud enough to be heard in those settings, although the volume seems very loud at home. I have begun to use a personal public address (PA) speaker (ChatterVox model 100), which gives me sufficient volume to be heard in public settings. In the absence of my PA speaker, I have resorted to positioning my computer screen so that others can read it, which has worked surprisingly well, except that I can't see my typographical errors, which causes some amusement and confusion. It may be a natural response for listeners to laugh at fumbles in speech, but I have some difficulty retaining my sense of humor about typographical errors that result in mispronunciations; I find these errors frustrating.
I have tried several systems, and it was important to me to participate in the decision-making process regarding what technology to employ. I currently use a convertible tablet/notebook computer (Toshiba Tecra M4) with two speech-generating applications (NextUp Talker and TextAloud, both from www.nextup.com), as well as one dedicated device (Link Classic from Assistive Technology Inc.). I plan to use Dasher (www.inference.phy.cam.ac.uk/dasher/) with a mouse, head mouse, or eye-gaze system when I can no longer type. At present, I use the PC for communication when seated and a pen and notepad when I am on the move. Since I now use a power wheelchair nearly all the time, the laptop is my primary communication tool.
One of the most important issues for me is the voice itself. I have become a fan of concatenative speech systems, which assemble segments of recorded human voices rather than synthesizing sounds from scratch. In my experience, listeners find concatenative voices more readily understandable and relate to me better when I use them. These were introduced to me by SLP Chris Gibbons at the local chapter of the ALS Association. I currently use AT&T Natural Voices™ (the voice of Mike), which is the most natural-sounding voice I have been able to obtain. I will soon be testing a speech engine from Loquendo, a concatenative system that allows more user control of expressive cues through punctuation. My dream system would allow me to vary not only the intonation of words or phrases, but also the tone of voice, so I could sound humble or assertive, sarcastic or sincere, happy or angry, sensitive or arrogant.
A Partnership for Communication
SLPs have become my close partners in assessing my needs; deciding on technologies; developing communication strategies; helping me understand the implications of my impairment for my role in family, work, and society; and getting access to funding for AAC systems. I have been fortunate to work with five SLPs over the last two years. I have an ongoing close working relationship with three clinicians (the other two were excellent resources for one-time needs related to swallowing assessment and strategies) who continue to be critical parts of my health care team. These clinicians have earned my greatest respect and admiration for the work they do.