We take so much for granted. When you know and love people who use augmentative and alternative communication (AAC), you become acutely aware of how much we take communication for granted. And often, when you know and love AAC users (or if you are an AAC user), you don't assume that everyone is able to talk. We know that sometimes people struggle to communicate. We know that some people have beautifully formed words in their minds that they fight to express.
In addition to the frustration that accompanies knowing what you want to say and not being able to say it, people often assume that AAC users have cognitive impairments.
So now, if you’re the person who’s trying to get your message across through AAC, you have to figure out a way to do that so that someone can understand you. AND you have to do it quickly and accurately so that listeners will believe you’re competent.
Not only do you have the frustration of having to communicate in an “alternative” way and the stigma of people questioning your intelligence, but there is also the added pressure of having to do it quickly enough to keep up with conversation so that your communication is relevant and on topic.
But that’s still not enough.
We all make mistakes constantly in our speech and language. We say the wrong word, shake our heads and correct ourselves. We get information wrong and immediately say, “Oh never mind. I don’t know what I’m thinking today.” We watch our listener for cues that our messages are being understood, and if we are not understood, we clarify. We are impressively adept at communication repair.
When you’re using AAC, and you say the wrong word, many of the cues we use in verbal speech are missing. The AAC user isn’t able to stop mid-word and correct the mistake. They’re not able to change their tone of voice to emphasize the word they meant to say. Even the nonverbal facial cues like shaking your head or the slight “ugh” can be challenging when your motor movements don’t always cooperate. So AAC users find different ways.
I’m sharing this video of Jess (with her and her mom’s gracious permission, of course) because in this 47-second video, she makes two clear mis-hits and handles them very differently based on my reaction.
Last weekend, Jess and I went to lunch and the movies and then she came back to my house (that’s important in the context of this video clip).
When Jess was first learning to use Speak for Yourself, her fine motor and visual issues affected her accuracy. She would hit buttons around the area she was targeting and listen for the word she was trying to say. When she would get it, she would make direct eye contact as if to say, “Did you get that? That’s what I meant.” As she’s progressed, there are fewer mis-hits, and now that she puts words together, that signaling is not as reliable, and most of the time not necessary. For the most part now, she means what she says. :)
In this video clip, Jess’s communication repair skills are so impressive. We’re in the car, and I’m not looking at her because I’m driving. So as a listener, I’m relying on auditory output and whatever gestures I can see in peripheral vision (as you would with any passenger).
I ask Jess where she’d like to go for lunch and she says “up to eyeglasses store.” She laughs but then when she realizes I’m trying to figure out a place for lunch that’s by an eyeglass store, she says, “Accidentally. Ice cream shop.”
Remember, we’re driving in a car, and if you’ve ever tried to do anything that requires fine motor precision like signing a birthday card or putting on mascara in a moving car, you know it’s tricky.
When she says “eyeglasses store” and I try to figure out where she is talking about, she says “accidentally” because it was a mis-hit and she didn’t want me to go down that path.
When she says “glitter” by accident and I don’t say anything, she simply corrects the mis-hit and says “Italian.” When I watched the video, I noticed that “glitter” is in a very similar location to “Italian” on the secondary screen. When I pulled out my iPad and checked in Speak for Yourself, the two buttons are actually in the exact same location on their respective secondary screens.
When she says, “Italian,” I say, “How about Pizzeria Uno?” just as she says, “visited Les Miserables.” I didn’t make the connection in the moment. When we went to see Les Miserables, we had parked at Pizzeria Uno, which shares a parking lot with On the Border. We had planned to eat at Pizzeria Uno that day but ended up eating at On the Border because Jess saw it across the parking lot and wanted to go there. (I’m very flexible with restaurant decisions.) I didn’t realize what she meant until we pulled into that same parking lot.
A word about phonology and AAC:
When her mom watched the video, she said that Jess might have meant she would be “less miserable” (since that’s how the device pronounced it, which I could have fixed if I hadn’t been driving). That is exactly the point worth making about the use of phonology in AAC. (Jess’s mom, Mary, writes the You Don’t Say AAC blog.) People using AAC will sometimes use words phonologically: even if the word itself doesn’t make sense in a given context, it sounds enough like another word or phrase that makes perfect sense in that context.
Fortunately, later that evening, Jess used phonology in her communication so that I can give you a perfect example. (I’m kidding of course. I’m reasonably sure that wasn’t her motivation at all). After the movie, we went back to my house and we were sitting across from each other on the floor going through my DVDs. She held up my Les Miserables DVD and I said, “Your mom didn’t care for that one.” She nodded, picked up her device and said, “You care it.” I said, “You’re right, I love it!” Then I thought, I’ve never seen her use the word “care.” When I looked at her message window, she had actually said, “You carrot.”
Since we’re talking about communication repair, if your child or student says something and the actual word doesn’t make sense, try saying it aloud to see if it makes sense phonologically. Does it sound like something else? Some children who use AAC are not yet reading, and just like toddlers who are learning to talk, they rely on how words sound. At times, students and adults who are literate will also use words that sound like what they want to say if they’re able to access them more quickly. When we are speaking, no one can tell whether we said “here” or “hear” based on auditory output alone. Even in our written language, we have issues with “your” and “you’re,” and “there,” “they’re,” and “their.” In a language full of homophones, homonyms, and multi-meaning words, don’t be limited strictly by spelling. Listen for the voice output and the meaning of the message.
Every day, I am impressed by the creativity and motivation of the individuals I know who use AAC. It amazes me that the cognitive skills of AAC users are questioned and, even worse, doubted. There is exquisite skill in using a device to accomplish the communication functions that verbal people accomplish with so many more tools. Even with our tone of voice, nonverbal cues, and ability to quickly revise what we say, there are misunderstandings.
Presuming competence for people who use AAC to communicate doesn’t mean assuming that EVERYTHING they say is intentional. It means realizing that sometimes they make mistakes in their communication that have nothing to do with their ability or intelligence. Mistakes are part of human communication. AAC users need the space for mis-hits and the time and the means to revise their message. We all want to be understood.