Last month, we attended and exhibited at the Assistive Technology Industry Association (ATIA) conference in Orlando, Florida. One of the reasons that the ATIA conference is my favorite – besides the “work trip” to sunny Florida in the middle of the cold Northeastern winter – is that the exhibit hall hours and sessions are coordinated. This means that exhibitors are able to attend sessions when the exhibit hall is closed. It also means that attendees don’t have to miss sessions to visit the exhibit hall. It allows for longer, high quality booth conversations with professionals from around the world who have a higher than average level of Augmentative and Alternative Communication (AAC) knowledge and experience. They also serve wine, beer, and appetizers in the exhibit hall, which creates a very casual atmosphere. It’s a different dynamic than any other conference when someone sets down their glass of wine to check out Speak for Yourself.
If you’re NOT looking for ways to build literacy into your core vocabulary instruction, let this post be a gentle nudge in that direction. Presume competence for your children or clients in communication and also in literacy. Expect that they will learn to read and write and provide the support to give them the opportunity.
Wait…if you’re going to stop reading because you don’t think your child is ready for literacy yet, please continue! There are things you can do, even if your child is an AAC beginner!
As I took notes and pictures of slides during the presentation, I made a note to myself that Speak for Yourself could easily be used with the Project Core goals. It could actually be used pretty easily with any AAC language system that has a strong core vocabulary. I believe that is the intention of the Project Core team. I LOVE when presentations can be easily converted to clinical applications!
There was a lot of information covered in the presentation, and everyone is busy, so here is my idea. I am going to write a post for the next five weeks (posted on Monday morning beginning February 27th, 2017) that will discuss one of the Project Core modules for emergent literacy and give some examples of how it can be used with the Speak for Yourself app. (If you’re using a different but equally robust AAC system, the information still applies. :) By the end of the 5 weeks, you’ll figure out where your learners are in terms of their literacy abilities. You can target those areas and then be ready to take the next steps!
For next week, let’s start with non-instructional routines!
Speak for Yourself main screen with Project Core words open.
During the hour-and-a-half presentation, the presenters covered five emergent literacy instructional routines. Before that, though, they discussed how they had guided support personnel in the school to start incorporating more core vocabulary modeling and overall AAC use by looking at non-instructional routines.
First, you may want to print a poster of the main screen with the Project Core words opened to use for aided language input opportunities.
Here’s an example of how you can get started during non-instructional routines at school (the presenters asked the teaching assistants in the school to do this):
Choose a non-instructional routine (e.g., morning arrival, snack time, getting ready for lunch, lunch, packing up to go home, waiting for the bus…).
Write down the things you typically say to the student during that time.
Highlight the core words on the page.
Looking at your list of what you usually say, are there any places where you could use core vocabulary instead of a fringe word?
Hang that paper up in the area where that routine occurs.
Have the device available and model the core words as you speak to the child during that routine.
Here are some examples of core vocabulary (using the Project Core words) that can be modeled. The Smarty Symbols used in Speak for Yourself accompany each core word in the examples below. This likely goes without saying, but just in case…If your students have more vocabulary opened on their devices, don’t remove words.
In the morning:
Good morning modeling template with core vocabulary using Smarty Symbols in Speak for Yourself.
At Snack Time:
Snack Time core vocabulary modeling template using Smarty Symbols in Speak for Yourself.
At the end of the day:
Time to go core vocabulary modeling template using Smarty Symbols in Speak for Yourself.
Thanks to the Project Core team for all of the work that went into creating a versatile resource to target literacy for students using AAC and sharing it for free online!
Jess, a 25 year old with Angelman Syndrome, waiting anxiously for the Mamma Mia musical to begin on stage.
“Presume Competence” has become a mantra of many excellent parents and professionals who are implementing augmentative and alternative communication (AAC) for individuals with complex communication needs (CCN). I’ve also experienced some misunderstandings in person and in online groups suggesting that presuming competence is not evidence-based in its idealism. So I’ve been paying attention to the things that people who presume competence do in practice.
Presuming competence is not idealism. Idealism ignores the challenges and barriers that need to be overcome. The very definition is that the ideals are often “unrealistic.”
Presuming competence is a philosophical difference. It’s a belief in socializing students for courage instead of compliance.
It is more than an ideology because when you start from the mindset that someone is capable and can grow, your actions start to reflect that. There are concrete, evidence-based ways that you can presume competence.
Provide comprehensive, robust AAC early
If children are not developing verbal speech, when you presume competence, you acknowledge that they still need a way to access language. Children typically start saying their first words when they’re a year old. Children who have CCN also need to have the opportunity to access first words, and the opportunity to choose what their first word is. If they’re given a limited AAC system, and they have to work their way up to a larger vocabulary variety, they may not be motivated by the available vocabulary. Give children with CCN the ability to explore, just as verbal children have the opportunity to babble until they can use words purposefully.
Look for success
We see what we look for…and often miss what we don’t. There is quite a bit of research about visual attention. When we presume competence, we look for students to be successful and build on those moments. Recognize that we are all subject to cognitive biases, such as confirmation bias: we look for evidence that agrees with what we already think. Look for the moments where students show you their intelligence. When you look for them, you will find them.
Take the “blame” if the child’s telling you something and you’re not understanding it
Take it upon yourself to be a better listener. If individuals with CCN are taking the time to try to tell you something, you can be pretty sure of two things: 1. It’s important. 2. They are trying their hardest and using every tool they have. Make sure that they know you’re reciprocating that effort.
Have high expectations and provide whatever support is required to meet them. Consider the barriers and obstacles and problem solve to overcome them without placing blame on the child. When you can look at the environment, communication partner, access or activity as the reason for a communication breakdown, you can make changes. If you say that the child is the reason, you haven’t left room for modifications to support success.
Limit physical prompts
Eliminate hand over hand prompting for students who don’t physically need help. Instead of taking a child’s hand and pulling it to the button you think they want, use aided language input or gestural prompts. When you presume competence, you’re giving the individual the benefit of a respectful interaction.
This week, I’ve seen some research supporting the use of least-to-most prompting strategies, which is exciting! There are links to two studies here and here.
Take a look at this 14-second video. We had been playing music. I thought she was stuck and having a hard time finding music. I was wrong. If I had hand over hand prompted her to say “music” and then played it, she wouldn’t have had an opportunity to tell me what she really needed.
Click to play
Value individual communication preferences
If your colleague waved to you in the hall as you walked past her, you wouldn’t grab her and insist that she look at you and verbally say, “Hi” to you. If someone chooses to wave to you, you can wave back or you can say, “Hi, how’s it going?” We use universal gestures to communicate, and we accept that as humans in a society.
When you’re trying to teach a student with CCN to greet people, you can presume competence by modeling different options as you greet them, but accepting their nonverbal greeting as you would from any other child. It’s also important to recognize that greetings often aren’t reinforcing for kids. And sometimes, separate from the CCN, kids are shy…especially when they’re put on the spot.
Topics of conversation also vary by individual preferences. Imagine someone at work starting to tell you about their children or a favorite show that just had a shocking plot twist. Would you ignore them? If your second grade niece started talking to you about American Girl Dolls, would you refuse to acknowledge her interest and instead start giving her math problems to solve?
Presuming competence is more than believing that a child is capable of thoughts, ideas, and learning. It is also the practice of making sure people – ALL people – have the right to talk about what THEY want, even if it’s not the topic we planned.
Listen with all of your senses. You know those annoying “whole body listening” posters? The ones that show the child sitting quietly in a desk with a closed mouth and feet still on the floor, hands folded in front of them, eyes looking in the direction of the speaker…yeah, not like that. Listen to understand, whatever that means for you. Watch their movements and look for patterns. Listen to the words they’re repeating over and over. Consider that it may not be “stimming” or a lack of understanding of AAC. Figure out what they’re trying to tell you.
I worked with a little guy in a preschool classroom a couple of years ago, who has since become verbal. He was apraxic and was using Speak for Yourself and able to put 3-4 words together.
One week, I went in to visit and the teacher said, “Every time I hand him the device, the only thing he says is ‘all’ the entire time.” The device was sitting on the bleachers while he was at gym. When I looked at the device history, he had, in fact, said “all” 512 times in the last 10 days.
As soon as he saw me, he ran over to the device, and said, “all.” When I said, “All of what?” he repeated it a few more times. Then he touched the upper right corner of the device where the Babble button (which opens ALL of the words in the Speak for Yourself app) SHOULD have been…but someone had locked it. I said, “You mean ALL of the words!” I went into the app settings and unlocked the Babble feature. He toggled all of the words “on,” smiled at me and said, “all.” I added and modeled, “all (the) words.” He was thrilled…he stopped saying “all” incessantly. His problem was solved.
He tried to get his point across over 500 times before someone was able to figure it out. (The teacher and staff were excellent, and I don’t know if I would have figured it out if he didn’t give me the extra clue. I also have the advantage of knowing the app really, really well:) Isn’t it amazing that he kept trying?
Also, accept that a student may not be able to listen like the cartoon posters. For some of our students, it takes so much focused energy to hold their bodies still that if they are doing that, it’s unlikely they’re able to listen attentively.
Acknowledge that People are Complex
There are things that we just don’t know. We spend a lifetime learning about ourselves. It is impossible to know everything about another person.
Presuming competence isn’t about belief in students in the absence of evidence. It is a belief in their right to access the communication to demonstrate their abilities as humans. You’ll never gather evidence without providing opportunity. So when you’re marking down minuses on data sheets, ask yourself, “Is it possible that there isn’t an adequate way for them to show me that they know this?” When you do that, and acknowledge that there are a range of variables between a plus and a minus, you start to problem solve for your student(s) instead of testing them.
Presuming competence is giving students the opportunity to learn literacy, math, science, and history regardless of their disability.
It is providing the chance to build relationships. It’s exposing students with CCN to leisure activities and allowing them to decide if it’s something they enjoy.
Presuming competence gives children a chance to explore and make mistakes without penalty. It gives them time to learn with support rather than testing or criticism. When you presume competence, you give the child a safe place to fail and the ability to learn from those small failures and try again. It’s how we grow. That growth and the confidence students gain from overcoming challenges gives them the courage to keep moving forward and develop skills to demonstrate their competence.
Owen looking at his vocabulary on the Speak for Yourself AAC app on an iPhone for the first time.
We have a few announcements to make…Two pieces of good news and one necessary business decision.
First, the business decision: Effective with the release of the 2.6 update, the price for the Speak for Yourself AAC app will be $299.99 USD. We reduced the price to $199.99 USD in October of 2012, and that’s where it has remained for the past four years. With added licensing costs for voices, updates and ongoing development, it’s necessary to increase the price at this time. If you already own Speak for Yourself, or if you purchase it prior to the next update (version 2.6), this will not affect you.
Second, effective in the next update, we are eliminating the in-app purchase of multi-user slots. This means that you will be able to store up to 40 different users within the Speak for Yourself app at no additional cost! Most importantly, you will have access to those multiple user slots with no need for any additional App Store downloads.
Finally, we announced last week that the Speak for Yourself Augmentative and Alternative Communication (AAC) app is going to be available on the smaller iPhone/iPod screens! Once you own Speak for Yourself, getting the iPhone version will be just like downloading SfY from the cloud on any device. Tiny SfY will be part of the Speak for Yourself app. There will be no additional cost.
A Closer Look at Tiny Speak for Yourself
Speak for Yourself on an iPad Pro 12.9″, iPad Pro 9.7″, iPad mini, and iPhone 6s
When we were initially developing Speak for Yourself in 2011, the iPhone of the day was the iPhone 4. It was amazing! It had TWO cameras! “Selfies” were new and exciting! There were AAC apps at the time that were available on the iPhone (and there still are), but when we looked at the small buttons of the Speak for Yourself app and the relatively quiet speakers of the iPhone, it didn’t seem to be a viable option for us. The phone itself measured only 4.5 inches high by 2.31 inches wide. We anticipated that phones would get smaller, and accessing 120 buttons on a screen would be unrealistic. “Maybe we’ll create a more limited vocabulary for the smaller screen down the road,” we thought. (Not so fun fact…Developers have to program/code separately for iPads and iPhones.)
But our technology forecasting was wrong…The iPhones actually got BIGGER! We started discussing the usability of the iPhone as an AAC option.
So last spring, for fun, our developer put a rough version of Speak for Yourself on his iPhone, and it felt like Christmas! It was usable! The Tiny SfY buttons were twice the size of the keyboard buttons that we use to text and email every day! Without any vocabulary limitation at all, we could accurately touch the little buttons. We tried it with a few of our trusted, local Speak for Yourself users, and they were immediately able to use it. We had our proof of concept.
As I wrote on our Facebook page:
My iPhone 6s in its LifeProof case measures 5.75 inches high by 3 inches wide, and there are larger “plus” options for an even bigger display. Many of us have our phones permanently, comfortably attached to our hand. We’re always ready for a quick social media update, text or Google search.
Now, Speak for Yourself users and the people who love and support them can be just as ready to make a quick comment, convey urgent medical information or correct a misunderstanding.
By the end of this year, if you’re using Speak for Yourself, your iPhone or iPod Touch will be able to put 14,000 words in the palm of your hand! The Speak for Yourself app is currently in beta testing for use on an iPhone/iPod.
The buttons are small, BUT they are twice the size of the keyboard buttons that we all use, including our students who flawlessly use mom or dad’s phone to search for videos on YouTube. Many of our users will be able to access Tiny Speak for Yourself (Tiny SfY), but even if they are not able to access it, having the app on an iPhone also puts the ability to model seamlessly into the hands of parents, professionals…and siblings. That may be the biggest game changer of Tiny SfY.
Take a look at Owen exploring his vocabulary file on an iPhone for the first time:
Jess (whose mom Mary writes the You Don’t Say AAC blog) thought it was pretty cool too! I handed my iPhone to her to explore while I was trying to put Speak for Yourself on her mom’s iPhone before her trip to Maui. Jess said “attempting” using Tiny SfY.
Jess exploring Speak for Yourself on an iPhone.
During a recent visit to NJ, Nathaniel’s mom and dad mentioned that they would like to be able to use it on his phone. So, when it was approved for beta testing, they were given the chance. Nathaniel’s mom, Kim wrote about it in her Hold My Words blog. Nathaniel’s brother snapped this great picture:
Nathaniel checking out Speak for Yourself on an iPhone. Photo credit Josiah Rankin originally posted on Hold My Words blog
Our plan is to have the iPhone version released and available to EVERYONE by the end of 2016! Stop by and visit us at ASHA in a couple of weeks to check it out!
Speak for Yourself is 50% off ($99.99 USD) October 12th, 13th and 14th, 2016 for AAC Awareness Month!
There are literally hundreds of Augmentative and Alternative Communication (AAC) options on the market! Making a choice can be overwhelming for parents and professionals. However, if you’re looking for a robust, comprehensive AAC app, that narrows the options considerably.
Often in AAC online groups and in person, parents and speech-language pathologists will ask for comparisons between two apps or language systems.
I’ve created a chart to compare the options you’re considering.
A few words/disclaimers about this chart:
First, I created this chart, and I am also one of the creators of the Speak for Yourself app. However, I am also a speech-language pathologist who specializes in AAC. In choosing the categories, I attempted to be balanced. I only filled in information about Speak for Yourself (SfY) because I only represent Speak for Yourself. There are some individuals who would not be best-served with Speak for Yourself as their AAC system. For example, if your student speaks Spanish or requires an access method other than direct selection, Speak for Yourself is not going to meet those needs at this time.
Second, an AAC system should be chosen based on the individual’s needs, not on any one element in this chart. Some of the features may be absolutely necessary for your particular individual and some may not matter at all. Only that individual, you and the other important people in that person’s life will be able to determine that. If you’re doing an AAC evaluation, multiple options should be considered and trialed with the client.
Third, this chart is not a “finished product.” If you think I forgot something that’s important, I probably did. If you think there is a better way to quantify ease of programming than “minimum number of touches to add a word” (each touch of the screen was counted) or “number of touches to hide buttons,” you’re probably right. Let me know!
Finally, choosing an AAC system is only the beginning. The real work is in the implementation. When you’re choosing the tool, the skills to use it are not automatically included. Some of the features within the AAC systems are intended to support the implementation (i.e. the search feature in SfY can help support aided language input), but the tool alone is not going to give a child language. Handing someone an AAC device and telling them to build language would be just as unreasonable as handing someone a table saw and telling them to build a house. It takes time, support, adjustments and patience. Those things are not found in a chart.
I’ve written before about the Evidence-Based Research Behind Speak for Yourself, but my intention with this post is to provide a brief, but complete overview of Speak for Yourself for those of you who are feature matching for your children or clients who will benefit from AAC.
There are a lot of similarities between AAC language systems. I can understand how someone looking at two AAC systems for the first time could say, “What’s the difference? It’s a grid of buttons filled with words and symbols.” So here are some of the components of Speak for Yourself that are found in other robust language systems as well.
While I avoid making direct comparisons between Speak for Yourself and other companies’ products, I will explain what makes Speak for Yourself unique enough to have a patent approved earlier this year. So if you hear or read something that says, “Speak for Yourself is just like…” that source of information is incorrect.
There are key differences, and they are the reasons many people who previously struggled with AAC are able to use Speak for Yourself successfully.
What are the differences?
First, the AAC user can access every word in Speak for Yourself in no more than 2 touches! This eliminates page navigation and gives the user feedback in 1-2 touches for every single word.
SfY is essentially a page/category based system with a focus on core vocabulary that keeps motor planning consistent.
The categories that are being used are accessible with two touches. You can also make any of the main screen buttons a single touch by turning off the link to the secondary screen for your users who need immediate auditory feedback. In other systems, you have to change vocabularies to be able to have access to buttons with a single touch.
Because every word in SfY is 1-2 touches away, it eliminates the need for extensive page navigation skills. AND your AAC user doesn’t have to know categories and subcategories to communicate with Speak for Yourself.
You can easily open and close vocabulary on the main screen and also on each individual secondary screen. This provides vocabulary that adjusts easily to the user’s language level.
For example, to mask vocabulary, you touch open/close and then you can close everything (by touching the bottom right corner) and then touch the buttons you would like to keep open. You can do that on every page. This differs from other systems where each of the buttons has to be programmed with the “behavior” and whether or not it is showing.
The symbols in SfY are meaningless. (We love them of course! They’re meaningless in relation to the vocabulary organization.) They are there as landmarks for the user. For example, since the symbol for “like” is a heart, users get used to going to that area and may scan quickly for the heart. BUT you could change it to any other symbol and it has no bearing on any other button in the app. So users don’t have to understand symbolic representation before they can use SfY. We have quite a few users who started prior to their first birthday because we did everything we could to eliminate the need for “pre-requisite skills.”
Search Feature: This is a game changer. There is a multisensory search feature in SfY that visually prompts you to the word you want to model/use by highlighting it. When you follow the prompt by touching the highlighted button, the word is spoken (auditory). The search feature navigates you to the word and if that word is not part of your open vocabulary, Speak for Yourself opens it for you.
This feature has given parents, teachers and speech-language pathologists the confidence to model and use AAC. When the adults are successful, children are supported.
The Babble feature allows the user to toggle between their custom vocabulary open/close configuration and having access to all of the vocabulary for exploration.
Hold that Thought gives individuals the ability to place something they were saying on the “back burner” to express a more urgent message. It’s also used by some to compose stories or messages that they want to tell someone later.
The History feature tracks raw data, but also summarizes that data so that it’s easier to analyze and measure progress (and that data can be emailed). It’s FREE within the app.
Back up and restore the vocabulary data file for each user through email, iTunes or Dropbox quickly…and for free.
What you will NOT find in Speak for Yourself:
The ability to change the size of the buttons. Changing the button size would require changing the motor planning and limiting the individual’s access to vocabulary and language significantly. This post provides more details about the button size and some ideas for AAC users who have fine motor/vision issues.
Adorable little girl with purple antenna headband uses Speak for Yourself with the support of a keyguard.
Multiple languages – Speak for Yourself is only available in English. We would love to have it available in multiple languages. We’re just not there yet.
Visual Scene Displays – This is probably a blog post for another day, but scene-based AAC consists primarily of nouns or pre-programmed sentences surrounding specific contexts or photos (often without input from the person using AAC). They may be useful for labeling or noun identification, but not for spontaneous, novel, generative communication.
The ability to link pages beyond two touches. Within 2 touches, Speak for Yourself users have the potential to access more than 14,000 words. That’s a lot of efficient, accessible language.
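As a quick back-of-the-envelope check on that number (assuming, as described earlier in this post, a main screen of 120 buttons where each button links to a secondary screen of roughly the same size; the actual layout may differ):

```python
# Rough estimate of the 2-touch vocabulary size.
# Assumption (not an official spec): a 120-button main screen where
# each button links to a secondary screen of ~120 words.
main_screen_buttons = 120
words_per_secondary_screen = 120

# Touch 1 opens a secondary screen; touch 2 selects a word on it.
reachable_words = main_screen_buttons * words_per_secondary_screen
print(reachable_words)  # 14400, in line with "more than 14,000 words"
```

Keeping every word within two touches is the design choice that removes the need for the deep page navigation skills mentioned above.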
Hope this helps all of you making comparisons and trying to find the best solution for your children, your clients or yourself.
Here’s a Speak for Yourself feature page:
Good luck on your AAC journey! We’d love to be part of it!
We take so much for granted. When you know and love people who use augmentative and alternative communication (AAC), it makes you more acutely aware of how much we take communication for granted. And often, when you know and love AAC users (or if you are an AAC user), you don’t assume that everyone is able to talk. We know that sometimes people struggle to communicate. We know that some people have beautifully formed words in their minds that they fight to express.
In addition to the frustration that accompanies knowing what you want to say and not being able to say it, people often assume that AAC users have cognitive impairments.
So now, if you’re the person who’s trying to get your message across through AAC, you have to figure out a way to do that so that someone can understand you. AND you have to do it quickly and accurately so that listeners will believe you’re competent.
Not only do you have the frustration of having to communicate in an “alternative” way and the stigma of people questioning your intelligence, there is the added pressure of having to do it quickly to keep up with conversation so that your communication is relevant and on topic.
But that’s still not enough.
We all make mistakes constantly in our speech and language. We say the wrong word, shake our heads and correct ourselves. We get information wrong and immediately say, “Oh never mind. I don’t know what I’m thinking today.” We watch our listener for cues that our messages are being understood, and if we are not understood, we clarify. We are impressively adept at communication repair.
When you’re using AAC, and you say the wrong word, many of the cues we use in verbal speech are missing. The AAC user isn’t able to stop mid-word and correct the mistake. They’re not able to change their tone of voice to emphasize the word they meant to say. Even the nonverbal facial cues like shaking your head or the slight “ugh” can be challenging when your motor movements don’t always cooperate. So AAC users find different ways.
I’m sharing this video of Jess (with her and her mom’s gracious permission of course) because in this 47 second video, she makes 2 clear mis-hits and handles them very differently based on my reaction.
Last weekend, Jess and I went to lunch and the movies and then she came back to my house (that’s important in the context of this video clip).
When Jess was first learning to use Speak for Yourself, her fine motor and visual issues affected her accuracy. She would hit buttons around the area she was targeting and listen for the word she was trying to say. When she would get it, she would make direct eye contact as if to say, “Did you get that? That’s what I meant.” As she’s progressed, there are fewer mis-hits and she puts words together now so that signaling is not as reliable, and most of the time not necessary. For the most part now, she means what she says.:)
In this video clip, Jess’s communication repair skills are so impressive. We’re in the car, and I’m not looking at her because I’m driving. So as a listener, I’m relying on auditory output and whatever gestures I can see in peripheral vision (as you would with any passenger).
I ask Jess where she’d like to go for lunch and she says “up to eyeglasses store.” She laughs but then when she realizes I’m trying to figure out a place for lunch that’s by an eyeglass store, she says, “Accidentally. Ice cream shop.”
Remember, we’re driving in a car, and if you’ve ever tried to do anything that requires fine motor precision like signing a birthday card or putting on mascara in a moving car, you know it’s tricky.
When she says “eyeglasses store” and I try to figure out where she is talking about, she says accidentally because it was a mis-hit and she didn’t want me to go down that path.
When she says “glitter” by accident and I don’t say anything, she just corrects the mis-hit and says “Italian.” When I watched the video, I noticed that “glitter” is in a very similar location to “Italian” on the secondary screen. When I pulled out my iPad and checked in Speak for Yourself, the buttons are actually in the exact same location on their respective secondary screens.
The secondary screen under the word COLOR with “glitter” highlighted by a bright green background color.
The secondary screen under the word EVERY, with the word “Italian” highlighted by a bright green background color.
When she says, “Italian,” I say, “How about Pizzeria Uno?” as she says, “visited Les Miserables.” I didn’t make the connection in the moment; I only realized it when we pulled into the parking lot of Pizzeria Uno, which shares a lot with On the Border. The last time we went to see Les Miserables, we had planned to eat at Pizzeria Uno, but Jess saw On the Border across the parking lot and wanted to go there instead, so we did. I’m very flexible with restaurant decisions.
A word about phonology and AAC:
When her mom watched the video, she said that Jess might have meant she would be “less miserable” (since that’s how the device pronounced it, which I could have fixed if I wasn’t driving). It’s important to mention the use of phonology in AAC, which is exactly what Jess’s mom meant. (Jess’s mom, Mary, writes the You Don’t Say AAC blog.) People using AAC will sometimes use words phonologically: even if the word itself doesn’t make sense in a given context, it sounds enough like another word or words that make perfect sense in that context.
Fortunately, later that evening, Jess used phonology in her communication so that I could give you a perfect example. (I’m kidding, of course. I’m reasonably sure that wasn’t her motivation at all.) After the movie, we went back to my house and we were sitting across from each other on the floor going through my DVDs. She held up my Les Miserables DVD and I said, “Your mom didn’t care for that one.” She nodded, picked up her device and said, “You care it.” I said, “You’re right, I love it!” Then I thought, I’ve never seen her use the word “care.” When I looked at her message window, she had actually said, “You carrot.”
Since we’re talking about communication repair: if your child or student says something and the actual word doesn’t make sense, try saying it aloud to see if it makes sense phonologically. Does it sound like something else? Some children who use AAC are not yet reading, and just like toddlers who are learning to talk, they rely on how words sound. At times, students and adults who are literate will also use words that sound like what they want to say if they can access them more quickly. When we are speaking, no one can tell whether we said “here” or “hear” based on auditory output alone. And even in our written language, we have issues with “your” and “you’re,” and “there,” “they’re” and “their.” In a language of homophones, homonyms and multi-meaning words, don’t be limited strictly by spelling. Listen for the voice output and the meaning of the message.
No girl’s day is complete without a selfie.
Every day, I am impressed by the creativity and motivation of the individuals I know who use AAC. It amazes me that the cognitive skills of AAC users are questioned and, even worse, doubted. There is exquisite skill in the ability to use a device to accomplish the communication functions that verbal people accomplish with so many more tools. Even with our tone of voice, nonverbal cues, and ability to quickly revise what we say, there are misunderstandings.
Presuming competence for people who use AAC to communicate doesn’t mean assuming that EVERYTHING they say is intentional. It means realizing that sometimes they make mistakes in their communication that have nothing to do with their ability or intelligence. Mistakes are part of human communication. AAC users need the space for mis-hits and the time and the means to revise their message. We all want to be understood.
Various iDevices pushed together on a table, each displaying a different app. The center one displays a photo.
If the iPad had feelings, I’d be concerned about its self-esteem in the world of Augmentative and Alternative Communication (AAC). Its inferiority complex would be reinforced at conferences when speech-language pathologists who do AAC evaluations pick it up with interest, but then put it down as they say, “Oh. It’s an app for the iPad? I only recommend devices.”
Well, the iPad is a tablet that deserves to sit at the AAC hardware lunch table with the popular devices. It seems to get snubbed as an AAC option because it wasn’t intended to be a “communication device,” even though for many individuals with complex communication needs (CCN), it is exactly that. (I wrote about it here almost 3 years ago.)
An engineer trying to make a circuit to record fast heart sounds invented the pacemaker. Scientists trying to make a wallpaper cleaner invented Play-doh, which brightens many of our days. Fortunately, someone looked past the intended function of those products and saw value in other areas.
Kleenex was invented to remove cold cream, but when people started using it as a disposable handkerchief, Kimberly-Clark Corporation (manufacturers of Kleenex) paid attention. Two years after Kleenex was first introduced, the company realized 60% of people using the product were using it to blow their nose, so they started marketing it for that purpose.
I can imagine people at that time clinging tightly to their used cloth handkerchiefs and saying, “the Kleenex is only a cold cream remover. I refuse to see it as anything else.”
The iPad was invented as a consumer tablet for work and entertainment purposes. When Steve Jobs revealed the iPad to the world, he said, “iPad creates and defines an entirely new category of devices that will connect users with their apps and content in a much more intimate, intuitive and fun way than ever before.”
Steve Jobs may not have been thinking about providing access to language through mobile device technology for individuals with complex communication needs (CCN). But why wouldn’t these individuals want to connect with their apps – ESPECIALLY their AAC apps – “in a much more intimate, intuitive and fun way than ever before?”
A child under 3 who’s not talking can have an educated parent decide to put words in his hands, and the iPad provides that opportunity. That child would be fortunate to receive any early intervention services at all, and most likely wouldn’t have the opportunity to receive an augmentative and alternative communication (AAC) evaluation. Toddlers have a lot to say.
Adults with CCN, who don’t have access to school support, often lose access to many of the therapies and services that had been beneficial to them. The iPad gives them the opportunity to explore and use AAC if they find it helpful.
The iPad wasn’t intended to allow family members and adults themselves to decide that AAC support allows them to communicate more completely, clearly and/or with more complex language…but it does.
Engineers at Apple may not have meant to provide a dependable, lightweight hardware device with a long battery life. They may not have known that they were creating a device with the ability to run the AAC language system of choice for an individual user. But that’s what they did.
Just as the Kleenex provided a solution for non-cold cream wearing individuals who wanted to blow their nose in a more sanitary and disposable way, the iPad provides a solution for many individuals with CCN to have access to AAC.
We were early adopters of new technology long before the iPad was introduced to the world. As soon as a company released a new device, we were immediately in touch with local representatives. We wanted to know it. We would try it and go through our mental inventory of kids and their needs. We’d ask ourselves, “Will this new device help anyone we know?” We tried eye gaze technology as soon as we could get it in front of us. We saw the life-changing potential of that technology immediately.
So when Steve Jobs stood on stage and gave the world the first look at that 10-inch, sleek, lightweight device, our eyes widened. It was big enough to provide access to a lot of language, but so portable that A LOT of people were going to be carrying them around in public. We were early adopters of the iPad as a way to provide AAC to people who needed access to language and were able to touch a screen (use direct access). It also had something else that AAC systems previously lacked…affordability.
At the end of July, I had the honor of attending the AAC Institute/I Can Talk camp in Pittsburgh to train the nearly 50 volunteers on the Speak for Yourself app. Each of the 23 campers was then paired with a volunteer, while other volunteers ran various activities and offered a helping hand to campers, families and fellow volunteers when needed. The camp is a wonderful opportunity for children and their families to be around other AAC users and families.
The local news did a story about the camp. The reporter starts by saying, “Those things they’re carrying are NOT iPads.” I initially bristled at that statement because several of the campers were in fact using iPads locked into AAC apps.
AAC Institute campers looking at their devices, which are iPads with the Speak for Yourself app.
However, when I thought about it, she was seeing it as more than “just an iPad.” When individuals are using a tablet and an app that gives them a voice, it is more than an iPad.
She spent hours at the camp. This reporter interviewed camp directors, parents and adults using AAC. She watched the campers interacting, playing and socializing. She saw “communication devices.” When she said that the campers aren’t carrying iPads, it’s likely that her view of the iPad is that it’s a tablet used for entertainment and business purposes. “Those things they’re carrying” are so much more.
When we see someone using a traditional device, we don’t refer to it as a Windows or Android tablet with specialized software, even though it is. When someone says they are using Traditional Device X, it’s rare to hear or see anyone say, “Well, that’s not the solution for everyone.” I think the reason is pretty simple…We know that already. It’s common sense. Nothing is the solution for everyone.
Imagine telling a colleague you’re considering buying a Toyota, and she says, “Well, that’s not the solution for everyone.” It would be odd to have that response before you even discussed whether you were considering a Prius or Sequoia.
Yet, when someone mentions the iPad as an AAC option in an online group or an in-person discussion, it is inevitable that someone says, “The iPad is not the solution for everyone.” Maybe they say, “AAC is not one size fits all.” That’s obviously true, but there are hundreds of apps, including various robust language systems, case options to address portability and durability, and three different screen sizes. The iPad IS absolutely an AAC device solution for many.
It may be a residual response from those early days of the iPad. School districts and parents were buying iPads and saying, “Make it work.” Maybe some of the professionals in the field are still recovering from that trauma. At the time, there were very few AAC language app options. “Making an iPad work” meant reprogramming entire apps and undoing (at times) years of language building and AAC implementation work.
The iPad is still not a viable choice for some AAC users. It’s generally not an option for individuals who face extremely complex physical access challenges…yet. The iPad does not support eye gaze technology or incorporate environmental controls.
However, for many children and adults who are able to physically touch a screen (direct access), even with fine motor and visual issues, an iPad is a viable hardware option to consider for AAC use.
The field of AAC acknowledges bias in software systems pretty readily. Practitioners are often biased based on their knowledge and familiarity. They have seen someone (or a lot of people) communicate successfully using XYZ language system. It makes sense. Of course people are going to think the AAC system that gives their child or students a voice is awesome. We are all biased.
Bias becomes a problem when it causes close-mindedness. It’s an issue when it interferes with the decision that would work best for the child/client.
If you are an AAC evaluator who doesn’t consider the iPad as an AAC hardware option for direct access AAC users, you are biased. Your personal preferences are interfering with an option that may be best for your clients.
If you are a parent or a professional, and your child can access a touch screen, an iPad is a valid consideration. If the evaluator didn’t consider it as an option, ask for their reasons. Here are some explanations they may give, and some responses to those reasons.
When I’ve presented or spoken with professionals, their objections to an iPad for direct access users are often outdated. It’s as if they made a list of iPad “negatives” six years ago, saved them to copy and paste into reports and never looked back. If you’re one of those professionals, it’s time to take another look. A lot has changed in the last six years.
We are not biased toward the iPad as a hardware device for AAC BECAUSE we created the Speak for Yourself app for the iPad. In the classic chicken-and-egg conundrum, it was the other way around. We recognized that the iPad was a dependable, portable, affordable option with a long battery life and a capacitive touch screen (which many of the traditional devices at that time lacked). SO we created an AAC software app that provided a strong language system to run on the iPad.
If you’re working with individuals who are trusting your professional opinion about their communication, that’s an immense responsibility. Those individuals are depending on you to be open-minded about the hardware and software options that exist. You’re trusted to consider the option that will provide the long-term best solution for that individual.
If you’re a parent, a professional or someone who uses AAC, and you feel like an evaluator is making a recommendation based on their own bias, ask questions.
As a field that embraces technology, it’s time to acknowledge that the iPad is not an inferior or temporary AAC option.
It’s time to realize that mobile, direct access AAC users benefit from the availability of Apple Stores when there is a hardware issue that can be corrected immediately.
Students often lose valuable time waiting for insurance companies to approve a dedicated device. In many school districts, these same students walk past “spare” iPads sitting on carts each day.
If you’re someone who would have been clinging to a handkerchief and talking about these newfangled Kleenex being a passing fad, it’s time to look again. Regardless of the hardware or software selected, the AAC implementation is crucial. It’s going to take time. Our students and children who are waiting to communicate don’t have time to lose.
I hope everyone is enjoying the summer and recharging for the new school year! In response to the modeling challenge issued by Dana Nieder from the Uncommon Sense Blog, some adult Augmentative and Alternative Communication (AAC) users have expressed an interest in a 21 day challenge geared towards adult/teen users. If you are looking for a way to increase fluency and use, give it a try! You may also find some of these activities helpful if you are a family member or professional who wants to learn an AAC system to support the individual(s) with complex communication needs (CCN) in your life more fluently.
If any of the activities make you uncomfortably anxious, modify them for your needs.
This challenge incorporates increasing vocabulary (semantics), socialization (pragmatics), expanding use of verb tenses (syntax), increasing device fluency, generalization and operational skills (programming and backing up vocabulary)! Have fun!
May is Better Hearing and Speech Month (#BHSM), so last month I shared an #AuthenticAAC moment each day: something that didn’t go exactly as planned, something I missed, or something I would have done differently in hindsight.
My intention was for anyone reading who was afraid to implement augmentative and alternative communication (AAC) to be more comfortable in the knowledge that things aren’t always going to go as planned. You’re going to make mistakes and miss communication opportunities, but those “mistakes” give you experience and the opportunity to improve.
Unexpectedly, for me, focusing on my shortcomings for the month has made me a more relaxed and patient clinician. I always thought I was patient, but as I was posting and then reviewing my shortcomings, I started finding patterns. The awareness of those patterns made me more conscious in my interactions with students. There is a different level of self-monitoring when you’re going to post something publicly. As an example, I would think things like, “I seriously can’t make the mistake of not giving enough wait time again when I just messed that up yesterday!” So I would wait beyond my previous comfort level, and my student would have more time to gather thoughts.
Here are my posts for the final week of May:
Monday May 23, 2016:
My student sat next to me on the sofa.
She looked intently at my watch and then her eyes twinkled and she said, “watch off.” Naturally I took it off. She said “on,” so I went to put it back on, but she stopped my hand and said “on” again.
I thought maybe she wanted to wear it, so I tried to put it on her wrist closest to me.
She pulled her arm back and said “on” again. I tried the other arm, but she pulled that one back as well, and said “on” again.
I tried my other wrist, and she stopped my hand.
I said, “I don’t know what you want me to put it ‘on’.”
She said, “On feet.” I slid it over her foot and her face lit up.
I should have just asked from the beginning.
I also should have been modeling “wrist.”
Authentic AAC moment with screen reading “on on on on on feet.”
Tuesday May 24, 2016:
Most of the #AuthenticAAC moments I’ve shared have been things that have happened with my full attention on the interaction with the student.
Today, we had just been on an outing to shop for craft supplies. I was talking with the teacher about the project plan while the student had a few minutes to relax since we just came back. He was watching YouTube.
There were Oreos sitting on the table and I heard him say, “Oreo” on his device and I passed one to him. He vocalized and sounded irritated and when I looked at him, the cookie was still in his hand. I said, “What’s wrong?” And he took an exasperated breath and said “chocolate Oreo thick” (Which is what I had handed him).
When I looked at the message window, I realized he had asked for a “vanilla Oreo.” I apologized and got him a vanilla Oreo as he ate the chocolate one.
Then he asked for another “chocolate Oreo thick” (Which is when I took this picture).
Authentic AAC moment with image of a chocolate Oreo in a student’s hand.
Wednesday May 25th, 2016:
I was in meetings all day today so unfortunately, I didn’t get to see any students. Days like this are necessary, but much less interesting.
Authentic AAC image of a laptop screen.
Thursday May 26th, 2016:
Today, my plan was to model for a student during her academic time to add some content vocabulary and of course core words. My student wrapped her arm around her iPad mini on the desk to shield me from modeling on her device. She looked me in the eye and used the device to say “mine.”
So, I figured I’d use dual device modeling instead and pulled out my iPad mini. I opened the app and said, “Okay, I’ll GET MINE” (modeling the words in caps). My main screen configuration was different and I had color coded some of the words. My voice was also different.
As soon as she saw it, she immediately began exploring what I had open and then used her device to say, “color eat” (eat was one of the words that was color coded on my iPad).
She then began exploring my iPad and comparing the words to her device. She found the words that were the same and I modeled “same.”
Authentic AAC moment with arms interlocked and the bottom of the Speak for Yourself app screen.
She pinned my arm under hers to keep my iPad within her reach (in the photo, her mint green sleeve is holding my arm with the bracelet in place. Sorry it’s so close. I couldn’t move back any farther).
She then went to my TO page, which is where her numbers are located, but mine didn’t have them. She used my device to say, “are same” and then went to her device, touched “TO” and saw that all of her numbers were there. On her device, she said, “no, no, no, no…” until the message window was filled with “no”s.
I didn’t realize until much later that she was trying to tell me that those screens were not the same. It finally clicked when she did it again with another page: she said “cards” on the PLAY page of my iPad, then went to the PLAY page on her iPad and pressed the button in that same place, which is “bubbles.” She then said “same no” and I caught on (and modeled “different”).
It’s always so obvious after the fact.
Also, I didn’t model any content vocabulary.
Friday May 27th, 2016:
This is the final #AuthenticAAC post for the month, and it’s a perfect way to end a very full month! Thanks to all of you who have joined in and shared your own moments and for all of you who have taken the time to follow mine!
I spent a good part of my day writing reports.
When I write about students, I try to write so that despite their challenges, if they read what I wrote when they’re older, they’ll know that I looked for their strengths and believed in them.
Well, today, Jess confirmed that it’s a really good standard to follow.
Prior to hitting record, I said “I’m glad we have the chance to hang out tonight” and she said “am” and I hit record…
I’m sharing an Authentic AAC moment each day this month for #BHSM, because sometimes, even with a lot of AAC experience and knowledge, things are missed, mistakes are made, and plans don’t work out. Thanks to all of you who are also contributing with this month of #AuthenticAAC. In case you’re not following the Speak for Yourself Facebook page, here are my posts from this week!
Monday May 16th, 2016:
Authentic AAC photo of Inside Out books with Sadness book leaning on the box of “Mixed Emotions.”
Today, a toddler student was trying to say something to me verbally. It was six syllables starting with an “m,” with several vowel sounds and short pauses to separate the “words.”
I thought the first word was “mad.” She loves the Inside Out books (pictured below) and “Anger” is her favorite. (I think because she likes my pretend dramatic mad voice and closed fists on the floor). I pulled the books out and said, “Is it something about mad?”
She looked confused and repeated her verbalization exactly. I moved the device closer to her and asked if she could tell me or give me a clue (she’s very new to AAC). She used it to say “give.”
I repeated “give” and tried to figure out context clues, but with each guess, she repeated the same series of syllables, as if I should be able to get it. I asked her if she could show me and she thought but then said “no.” I apologized and said I’d keep trying, but she sighed, laid down and said “no.”
After a few seconds, she popped back up, went through the Inside Out books and handed me “Sadness.” She used the device to say “read” and verbally said “please.” I read dramatically like I usually do and put my head in my hands and fake sobbed because that always makes her giggle. It worked. By the end of the book we were both laughing, but I wiped away a real tear.
The reality is that sometimes – a lot of times – I don’t figure it out. Some days, the best I can do is say, “I’ll try again tomorrow.” And I will.
Authentic AAC: Sometimes students aren’t as excited about geese families.
Tuesday May 17th, 2016:
On a community outing today, we had some extra time (the bowling alley didn’t open until later), so we stopped at a park.
As we pulled into the parking lot, there were families of geese. I thought it would be a great opportunity for language and modeling because there was a lot going on, and it was a nice “experiential learning opportunity.”
When we got out, the student was less than thrilled. He wanted to go directly to the bowling alley. He looked at the ducks and geese, and I talked about them. When I tried to model, he moved the device away from my hand and asked for a snack.
I gave him the snack and tried to model again.
The babies were running to keep up with the mom. In my mind, I modeled “running fast,” “trying to catch up ” and “following mom.” But even after his snack, he said “stop,” and pulled the device away from my hand. He wasn’t as excited about the geese as I was.
So we walked and looked, and I took some pictures in case he wants to talk about it another time.
Wednesday May 18th, 2016:
Today I was in a classroom modeling during a student’s instructional time. She was being asked to read a sentence and she was choosing to do it verbally. I was waiting to model the word until after she said it (to give her the chance to use the device if she chose to and so that she was reading it rather than repeating after the device).
My finger accidentally touched the main screen button and the device navigated to the secondary screen. I stopped but the student looked at the secondary screen. I waited but then touched the home button to go back to the main screen.
When it happened a second time (yes, my finger got too close to the screen twice!), we realized she was looking at her reflection in the black part of the screen.* So I modeled “look in mirror,” and the teacher found a mirror. The student was all smiles.
When accidents lead you down a different path, sometimes it would be a worse mistake to ignore it.
Authentic AAC photo of a hand reflected in the black space of the iPad with the Speak for Yourself AAC app.
*The background color can be changed to a less reflective color if the reflection is too distracting for a student.
(Photo is of my hand reflecting in an iPad screen because I forgot to ask permission to take pictures:)
Thursday May 19th, 2016:
Students should have access to #allthewordsallthetime…unless they put their device down in the grass during gym to play ball with friends.
#AuthenticAAC moment when a student puts his device in the grass to go play ball.
Friday May 20th, 2016:
I sat down with my little student as she was finishing her lunch. She wanted to put her baby doll that she was holding in a “Jolly Jumper” (it’s a swing of sorts that hangs from a doorway so a baby not yet walking can sit in it and jump) that was being cleaned and re-assembled. I told her what it was and that when she was finished eating, we’d put her baby in it. When it was clean and reassembled, the teacher put it aside.
A few minutes went by and we talked about the Itsy Bitsy Spider and her upcoming trip. She finished eating, and said she wanted to get “out” and “read” and “color.” I said, “Sounds like a plan” and helped her out of the seat.
We sat on the floor, and I started to pull out books and markers to reinforce the plans she made.
She verbally said “baby” and I said “Baby wants to listen to the books too?” And she said, “Baby muh muh muh.” She saw that I was confused so she pointed towards the jolly jumper and said “baby in.” At that point I remembered that I told her after lunch she could put the baby in and I said, “Oh! You want the baby to jump jump jump!?” And she smiled and nodded.
If not for her awesome communication repair skills and multi-modal communication, I would have forgotten all about the jolly jumper. It was out of sight, out of mind for me…but not for her.
As I’m keeping track of my AAC “errors” this month, I wonder how many times these communication attempts are missed without me ever realizing it.
Authentic AAC photo of the baby doll tossed aside after her time in the Jolly Jumper.
This month, I’m sharing the #AuthenticAAC moments that I learn from each day during #BHSM (Better Hearing and Speech Month). Then I’m posting a round up each week for those of you who are not on the Speak for Yourself Facebook page or prefer to read all at once. I hope you gain confidence in your ability to use augmentative and alternative communication (AAC) with individuals in your life. The reality is that you can be successful and imperfect simultaneously.
Which brings me to this week…
Monday May 9th, 2016:
A couple of weeks ago, I posted about the eyes that turn your hand into a puppet and how much fun a student had when we used them. I’ve seen the student several times since then, and we’ve done a bunch of other stuff.
Today our conversation went like this:
Her: LOOK and pointed to my bag
Me: Sure! We can LOOK for something to PLAY or READ (modeling the words in caps and going through the bag).
Her: LOOK (pointing to her eye) verbally says “eyes.”
Me: “Yes, you LOOK with your eyes!”
Her: (smiles and touches her eye) verbally says “eyes.”
Me: Here, do you want to LOOK with your eyes to find what you want? (leaning the bag towards her.)
Her: Pulls out the “eyes” (shown in photo), verbally says “eyes” and nods her head excitedly.
Me: (feeling pretty dumb) Yep, you’re exactly right. Those are eyes.
Her: Giggles and nods her head as she puts them on her fingers.
#AuthenticAAC picture of an adult hand holding googly eye hand puppets over a bag of toys and books.
We had referred to them by color last time we used them. I’ve never heard her say “eyes” before. It’s SO obvious in hindsight, once you have the context, but I didn’t make the connection until she pulled the eyes out of the bag. I could have missed a huge opportunity to reinforce her persistent attempts to tell me what she wanted to look for in the bag.
Fortunately, I felt like I was missing something and let her go through the bag. Fortunately, she knew I was trying to figure it out – or she was so motivated by those eyes – that she didn’t give up. Without either of those factors, it could have been a very different outcome.
Tuesday May 10th, 2016:
In an effort to give a student access to more specific language and avoid confusion and frustration, I added choices like “barbecue chips” and “pretzel sticks.”
If someone is going to the store and asks if he wants anything, I wanted him to be able to ask for exactly what he wants. (He would say “chips” or “pretzels” but not have a way to specify flavor or shape preference).
So, when he would say he wanted “chips,” I’d ask, “What kind?” And he’d respond. However, by doing that, I’ve inadvertently created a motor/language pattern where he says things like “chips barbecue chips” and “pretzels pretzel sticks.”
Speak for Yourself screen with the message window reading “vanilla oreo thin chips barbecue chips.”
I don’t want him to have extra hits and more work, and I also want to make sure he’s clear to less familiar listeners.
What I *should* have done instead was to model the choices for him initially rather than asking him to be more specific after he made the general request. (He was already able to make the request and respond with more specific information, so I didn’t model choosing one or the other.) I could have also varied how I asked the question with the choices (“Do you want barbecue or regular chips?”) instead of waiting for him to ask, but that gets tricky because then I’m assuming he wants chips.
To “fix” it, my plan is to bring several of the specific items and model only the specific item name without the more general term. He’s a smart guy, and I think he’ll realize and appreciate pretty quickly that he gets what he wants without the extra hits.
Wednesday May 11th, 2016:
It would be wonderful if every student was excited and engaged all the time. BUT that’s just not reality…for anyone!
AuthenticAAC moment with the student slouching on a sofa next to his AAC device covering his face with a pillow.
In these Authentic AAC situations where a student is physically closing me out, there are a few things I try:
Modeling some options of things we can do that the student typically enjoys.
Pulling items out and starting to use them.
Modeling “tired,” “need sleep” or “leave me alone please” to see if the student responds.
Sitting quietly and waiting.
Some days, the first thing I try is “correct” based on what the student needs.
Today, I tried all four.
I was quiet for a solid 2 minutes (it sounds short, but it’s long in real conversation time) before the student, with the pillow still on his head, decided to talk to me.
Thursday May 12th, 2016:
I went into my session today with a plan: we were going to target two-word utterances.
I started strong and modeled some two word phrases (read book, play ball).
Then, my little student wanted markers. And she was dressed beautifully. And she was sitting on the carpet. And she independently said “off green” so that I would take the lid off of the green marker.
But very quickly, the lid was off of the blue….and red…and purple. My hands were busy alternating between protecting her clothes and the floor and helping when she said “off” and “help” to get the marker lids off.
So, I modeled two-word utterances a whopping four times for the entire session.
Sometimes goals change. Today, I kept the floor and a cute outfit marker-free. A giggly little girl said a lot of one-word utterances and one two-word utterance independently.
We’ll work on two-word utterances next time. On the days when my goals don’t align with a toddler’s, I decide pretty immediately that I’m the one who has to be flexible.
We plan and toddlers giggle, and that’s just as it should be.
Image is of little feet with stylish sneakers crossed at the ankles by an iPad mini with the Speak for Yourself app. There’s a coloring page on the opposite side of the feet.
Friday May 13th, 2016:
This Authentic AAC moment is two parts…an update to my post earlier this week and something I could have done better.
Remember this week when I said a student had paired the general term with the more specific item and was saying both every time? It wasn’t the worst issue because most people would understand, but there’s no reason to be redundant, especially when communication is already a challenge.
I’ve had two sessions with him since that post. Last session, he told me to bring pretzel twists next time.
Image of the Speak for Yourself screen with the message window filled. It reads “pretzel twists bus pretzel twists pretzel twists pretzel twists now pretzel twists.”
Today, on an outing, he asked for “pretzel twists” in the store…which I had left on the bus for the ride home. He had already had some baked chips. So I modeled “bus” and told him I’d give them to him as soon as we got back on the bus. He was having none of that. He verbally said “no” and then “pretzel twists pretzel twists now pretzel twists.”
As I ran out to the bus to get them, I realized I could have done better: I should have given him a choice before we went into the store.
When I handed him the pretzel twists, he held them throughout the store. Maybe sometimes it’s the security of knowing that we can have what we want whenever we want it.
He didn’t eat the pretzel twists until we got back on the bus. But he made that decision.
Thanks to all of you who are also sharing your authentic AAC moments, publicly and privately!