Interview between Speaker 1 (Meg) and Speaker 2 (Alyssa Hillary)
Welcome to the Two Sides of the Spectrum Podcast. A place where we explore research, amplify autistic voices, and change the way we think about autism in life, and in our professional therapy practices. I’m Meg Proctor from learnplaythrive.com.
Meg: Before we get started, a quick note on language. On this podcast, you’ll hear me and many of my guests use identity-affirming language. That means we say, ‘autistic person,’ rather than, ‘person with autism’. What we’re hearing from the majority of autistic adults is that autism is a part of their identity that they don’t need to be separated from. Autism is not a disease, it’s a different way of thinking and learning. Join me in embracing the word ‘autistic’ to help reduce the stigma.
Welcome to Episode 36 with Dr. Alyssa Hillary. Alyssa is a recent graduate from the interdisciplinary neuroscience program at the University of Rhode Island and is also a math teacher. Alyssa is autistic and a part time AAC user, and we’ll talk a lot about what this means throughout the podcast episode. Their research centers on disability and neurodiversity; and the way Alyssa puts it, they have too many research interests. One of those interests is augmentative and alternative communication, or AAC, which Alyssa thinks about as an engineer, as a researcher, and as someone who needs it.
So, in this episode, you’ll hear Alyssa mention brain-computer interfaces, mainly as they relate to their research with people with ALS. But just to define this term in case it’s new for you, a brain-computer interface is a system that lets a person control a computer with only their thoughts. And at the end of the interview, we do talk about whether this might become relevant in the future for autistic people as well. So, Alyssa understands AAC in ways most of us never will. And in this conversation, Alyssa really helps us translate their own experiences and research directly into our clinical practice. One thing that may come up for you in this episode is that you might not have the actual training to support your clients to learn and use AAC. Honestly, most of us don’t. But our autistic clients need for us to stop waiting on someone else to show up and do the assessing and the teaching. They need for us to learn how to do it ourselves. And it’s really exciting to do that.
I am thrilled to announce that at Learn, Play, Thrive, we have a new course called ‘Authentic AAC: Implementing Communication Systems for Autonomy and Connection’. This course is extraordinarily practical, and will transform you from wherever you are in your AAC knowledge to a person who can approach your AAC learners with confidence. It’s taught by Kate McLaughlin, who’s an SLP specializing in respectful, authentic, and effective approaches to teaching AAC. It doesn’t matter if you’re an OT or an SLP, this course will totally change how you approach your clients who use AAC or who could potentially benefit from it. You can find it in the show notes or at learnplaythrive.com/AAC. Here’s the interview with Alyssa.
Hi, Alyssa! Welcome to the podcast.
Alyssa: Thank you.
Meg: I’m so glad to have you here. And I want to start with just a little bit of your story, starting with the current day and working backwards. You are a part time AAC user. Can you tell us what that means?
Alyssa: So, when I say that I am a part time AAC user, what that means is that some of the time I use speech. And at the times when I’m using speech, my speech may even sound, not normal per se (I know that I sound noticeably autistic to people who know what they’re listening for), but fluid. And then at other times, speech does not work for me and I need to use other tools to communicate effectively. Speech not being effective could mean that I can’t speak at all, or that I can only get out a word or two at a time, which is not going to meet my needs in a research meeting. It could mean a variety of things.
Meg: And how is that different from selective mutism?
Alyssa: The — to an external observer, I’m not certain how it would be different. In terms of the internal experience, selective mutism is nominally associated with anxiety, whereas my variable speech abilities over time basically are not correlated with anxiety. I will lose speech if I am too tired, if I’ve been exposed to too many sensory triggers lately, if I’m physically ill, but anxiety is not one of the reasons that I lose speech because despite how it may appear with my intermittent speech abilities, I don’t have selective mutism. That’s not what it is.
Meg: I think that makes a lot of sense. So, it sounds like you’re saying that producing speech takes a lot of resources and sometimes those resources just aren’t available because you’re using them in other ways.
Alyssa: That’s a pretty good way of thinking about it, I think. What I’ve noticed is that my ability prioritization is most likely atypical. What I mean by that is that, under stress, we all lose skills.
Alyssa: But for me, speech is one of the first things that will give out. Whereas I think that most people would lose, say, graduate level math before they lose speech. I have tested this. Speech gives out first, every time.
Meg: Yeah, that makes a lot of sense. And I think that’s a really compelling message to people who are afraid to introduce AAC because they want their kids, or their clients, or their students, or whoever to use verbal language. And that’s taking away the only way a child could potentially communicate in those moments of stress when they need it the most.
Alyssa: Yeah. When speech is working, I generally use it. But when it’s not, my options are: use AAC, or don’t communicate in language. Speech does not magically become an option again just because I don’t have AAC available. If it’s not available, I’m just in trouble.
Meg: Right. Absolutely. Tell us a little bit about your personal journey. What was it like for you before you started accessing AAC?
Alyssa: Before I started accessing AAC, I would still have times when speech was more effective or less effective, I just didn’t have AAC about it. So, when I say the options are use AAC or have a problem, I know, because I would have a problem.
Meg: Are there things that you have been able to do in your life or that you’re able to do now that wouldn’t be available to you if you didn’t have access to AAC as a communication option?
Alyssa: I am much more confident as a classroom teacher knowing that even if speech gives out, I still have access to language-based communication with which to teach my students. I probably would not want to be in charge of a face-to-face chemistry lab without some form of language-based communication. That seems like it might not be ideal. Knowing that I have AAC as a backup available, and that my supervisors know that that’s a possibility that I may need to use it, I am not concerned about teaching face-to-face chemistry labs, though in the past year there have been other issues.
Meg: Yeah, I love that you started your answer with talking about confidence. Because when there are objections, or deficits-based goals that people want, or things that people in the lives of autistic children are trying to keep them from, I come back to the big, long term goals: I want them to be authentic, I want them to feel joyful, I want them to be confident in doing the things they want to do. So, what gets them there? I love that you’re helping us connect your access to AAC to your ability to confidently and successfully do the things you want to do.
Alyssa: Yeah. AAC makes it so that I don’t need to worry about whether or not speech will be working before I say I’m going to deal with it.
Meg: Yeah. So, you are an autistic AAC user researching AAC, and a lot of your research gets very deep into neuroscience and some of it is more cultural or ethnographic and I want to talk to you about all of it, or as much of it as we can today. But before we dive in, tell us a little bit about what it’s like being an autistic AAC user researching autistic experiences and AAC use.
Alyssa: All right. So, this depends a bit on which way that I’m researching AAC and/or autism. When I’m doing it with brain-computer interfaces, the fact that I use Proloquo for text is not super relevant. When I’m working on my paper — when I was working on my paper on AAC for speaking autistic adults a few years back and taking the class on augmentative and alternative communication, it was systemically awkward a lot of the time. Some of this was reading papers about AAC or about autism that were definitely written with a specific audience in mind. And I don’t mean researchers as an audience. I’m a researcher. It’s fine for me to read things that assume I’m a researcher. But when they also assume that I am not an AAC user or that I am not autistic, that can be awkward. There are times where I was reading research about interventions where I saw that the intervention was rated as successful, but I also saw that the goal of the intervention was something that I did not necessarily consider ethical.
Alyssa: I saw ‘reducing echolalia’ as a goal, as one example. Helping a person use their echolalia in ways that other people understand as communication? Okay, yes, fair. Reducing echolalia? No, not cool. Communicative echolalia is a thing.
Meg: Right. That ties right back into your point about just taking something away from you doesn’t help you communicate.
Alyssa: Right. Taking tools away doesn’t help. Making tools truly available — which means the support to use them effectively if any supports are needed; and then not being obligated to use any specific tool with the expectation that I will use whichever tool is working for me at the moment — does help.
Meg: It’s interesting hearing about your experience as a researcher. I interviewed Moyna Talcer in Episode 33, who is an autistic OT who is doing a qualitative study of autistic mothers and their sensory experiences.
Alyssa: That sounds interesting.
Meg: Yeah, it is. And as an autistic person, she was able to bring more ethical, and thoughtful, and less traumatizing methodology to her work. And other autistic people I’ve interviewed have talked about just sort of cringing at what they’re going to find when they start reading research. And it sounds like we need more autistic people, or at least people with a neurodiversity-affirming lens, in research roles so we can start changing how we study and learn about and from autistic people, and what our outcomes are. Are they masking and compliance, or are they joyful, authentic participation?
Alyssa: We need research that begins from the basic assumption that autistic people are real human people who do things for real human reasons. This is a phrasing that I’ve probably said before, though I’m not sure how publicly. The reason that I say it as low-level as that is that — trigger warning for Applied Behavioral Analysis and/or Ivar Lovaas — in an interview with Psychology Today published in 1974, Ivar Lovaas, pioneer of Applied Behavioral Analysis on autistic people, directly said that ‘autistic children were not human in the psychological sense’. Not only is Applied Behavioral Analysis built on what he did, but a significant portion of autism research that does not directly claim to be Applied Behavioral Analysis is still based on, at some level, the work of a person who straight up said we were not human in the psychological sense. We need people working from the basic assumption that we are in fact people who do things for real people reasons.
Meg: I’m so glad you made that connection. And the same is true in the world of interventions. People say, “Oh, I’m not doing that,” but the work they’re doing and the strategies they’re using are fully and wholly built from behaviorism and from this assumption that autistic people need to be converted into non-autistic people.
Alyssa: Yup. Like, especially intervention, I would say, yeah. Because this guy was an interventionist. He was pioneering an intervention!
Meg: Well, I’m glad to have you in the world of research. And it’s a messy path for how research influences intervention, unfortunately, and paradigm change takes a very long time. But I know the listeners of this podcast are some of the people on the ground trying to make it happen, and one of the areas that we’re talking about more and more is AAC. And we’re not just speaking to SLPs here, we’re speaking to OTs too, and to people who don’t yet have the training and expertise to support autistic people to use AAC, about how and why we should be listening to autistic people and neurodiversity-affirming therapists who can train us so that we can support that process. I want to start by asking you about barriers. What are some of the barriers for autistic people getting and using appropriate AAC?
Alyssa: There are too many. I think that one of the biggest is people literally not knowing that AAC is an option. This is not a problem unique to autism. It comes up with the brain-computer interface research that we do with people with ALS as well: people who would benefit from some form of AAC may not even be at the point of needing a brain-computer interface. They’ll see a special about brain-computer interfaces and start contacting researchers about brain-computer interfaces. And they don’t need that yet. They could use an eye tracker; they could possibly use a keyboard or a switch system. But they didn’t know that any of that AAC existed, so they go straight for the first thing they hear about. If that’s regularly happening, how many people who didn’t see the special still don’t know that any AAC exists? So, the first one is people just not knowing AAC exists. Not an autism-unique problem, but definitely an autism-affecting one.
The second is attitude. Or no, I would say the prioritization of speech over all other forms of communication. This would be people defining functional speech as the ability to speak at least a certain number of words to make certain requests. This would be people saying that if there’s any speech at all, we should work on that. Which, okay, maybe you can do that too. But ‘able to speak sometimes’ does not mean ‘able to speak always’. Give people as many tools for communication as possible. There are no prerequisites for AAC. There are no real contraindications for AAC either, except possibly being literally dead. But in practice, people often act like both too much speech and not enough traditional indicators that speech may someday be coming are contraindications. No. Give it to everyone. Stick an AAC tablet in the classroom, and let students who think it would be useful self-identify by going to play with it. The ones who keep trying to use it are probably the ones who should have their own.
Meg: I love that. And the way you framed it is so useful for me to visualize this narrow window that we’ve created for who can get AAC. Not too much language, but also not too little. And I’m wondering — can you expand on the idea that there are no prerequisites for AAC?
Alyssa: So, when I took my class on AAC — and this story is actually in the paper ‘Am I the curriculum?’ as well — but it fits with the no prerequisites. They showed us a video. In the video, the speaker pointed out that there are no prerequisites and gave us an example of a student who would frequently not get AAC but should have AAC access. A student who did not speak and had a tendency to sit on the floor, to hide under tables, that sort of thing. I, at the time, was a PhD student. I am no longer a PhD student; I graduated. But I was a student who regularly sat on the floor, sat under tables. I have had research meetings with my major professors during which I was sitting on the floor under a table. So, this example of something that has gotten people denied AAC was viscerally and personally horrifying. What I’m hearing when I hear that is more of the idea that people, even people like me, aren’t supposed to make it through as it’s currently designed. I mostly speak. I was in Gen Ed classes throughout. But what I’m hearing is, if the systems of Disability Services had known about me, they would have said that I’m too disabled for AAC. This isn’t a statement that I don’t belong there. This is a statement that they’re hurting everyone who’s there. I made it through to say, “Oh, my God, that’s bullshit.”
Meg: Yeah, it ties directly into not presuming competence, to just assuming that a non-speaking person doesn’t understand, doesn’t have anything to contribute, couldn’t use AAC if you gave it to them, when we have no idea.
Alyssa: We have no idea. Like, it’s not that I shouldn’t be in that system. It’s that that system shouldn’t be.
Meg: Yeah. Yeah. I know one barrier that can come up is that robust AAC systems are generally technologically-based and use, often, symbolic pictures. And we do have this job as therapists to figure out what types of instructions are going to be most meaningful for a child, what kind of schedule — I’m often using objects instead of symbolic pictures for things like schedules, because it’s easier for the child to make that connection. And for AAC, which is highly motivating for a child to be able to communicate, and be understood —
Alyssa: Until they might get at that.
Meg: — they might still be able to learn symbolic. Yeah, we don’t need to be checking to see can they match pictures before we see can they use AAC.
Alyssa: Yeah. Like, until — like, until you make it a test, until it becomes a ‘use this to say the words I want you to say’, until they learn that even the communication they make will be ignored — because those are all things that happen, unfortunately. But as long as those things don’t happen, the ability to communicate is itself something that people largely do care about. Human connection.
Meg: Mm-hmm. Yeah. There’s really a lot we could unpack there. This point that often —
Alyssa: So much.
Meg: Yeah. Often our potential AAC users aren’t using their AAC because we’re not teaching it right. We’re ignoring the ways they are communicating; we’re telling them what to say on their device. We’re not modelling it, we’re instead prompting and doing this bizarre thing where they communicate in another way and then we hand them their device and say, “Say it on your device,” whereas they’ve just said it in another way. There’s a better way to be teaching.
Alyssa: If you understood them the first time, you understood them the first time. If you actually don’t understand what they wanted to tell you, requesting clarification can be reasonable. But even then, the device is one option for this clarification, and may or may not be the one the student wants to use. And frankly, I’m not super impressed by a lot of how people describe modelling anyway. Like, the way a lot of modelling is described sounds like intervention, not like language learning.
Meg: Can you say more about that?
Alyssa: When we speak to babies who are assumed to be typically developing — whether or not they actually are — we speak full sentences to them with our mouths. We may also speak baby talk around them at some level, but we are exposing them to full, complete, grammatically correct sentences, communicated using the tools that we think they are going to use. Modelling AAC, I see people frequently, if they use full sentences at all, speak them with their mouths, and only hit one or two key words with the AAC. That’s not how the AAC user’s going to be communicating, ever, in their life, realistically. If they need AAC for the rest of their lives, they’re either going to be communicating by putting their full sentences into the AAC, or if they’re going to be using speech, speaking in full sentences. Or they may be composing full sentences with the AAC and speaking them aloud, but speaking a full sentence while hitting key words with the AAC device is not how that’s going to happen. This is not modelling anything real that students are going to be doing later.
Meg: That is such a good point. And I’ve seen two- and three-year-olds navigating through multiple menus on their AAC devices, making their two-word sentences and then their three-word sentences, and then their longer sentences. But I think another barrier here is technology. People are scared of young kids and technology. ‘What if’ — I’m air quoting here — like, what if they just play on it? What if they just push the buttons?
Alyssa: Has nobody heard of babbling?
Alyssa: That’s a stage of language development. Has nobody heard of babbling? Please, learn about language development, learn about the babbling stage, learn about gestalt language development, learn about analytical language development, learn about how that’s kind of a spectrum between the two. Learn about how echolalia is not unique to autism, but is generally a hallmark of somebody being further towards the gestalt side of things, which I may be mispronouncing. I’ve only read that word, not spoken it before.
Meg: I think that’s right. Yeah, this is a great response. And I have said this before, earlier in my career. I’ve said, “No, I don’t do screens with two-year-olds,” and probably some of our listeners have too. And when we know better, we do better, right?
Alyssa: I mean, my honest take on that is, why aren’t you doing screens with two-year-olds? I had screens as a two-year-old, I had screens — I went to computer classes when I was a preschooler. As far as I know, I had access to a computer around when I started talking and I was not speech delayed. My speech has always been wonky, but I was not delayed.
Meg: And now you’re a researcher with a PhD researching brain-computer interfaces. If that’s not an example of how access to the things you’re interested in has helped you build your actual life, I can’t think of a better example.
Alyssa: Yeah, papers. Oh, God. I got to revise and resubmit that yesterday, so.
Meg: Oh, stressful. [Laughs] I want to linger on technology a little bit. Are all of your favorite AAC systems technologically-based?
Alyssa: The ones that I have used the most throughout the pandemic are all technologically-based. But when I was a master’s student in math, the AAC solution that I used as a student and as a tutor was a whiteboard marker. And also, the whiteboard that was already in the classroom. As a tutor, I was up and about, I was standing near the whiteboard. As a student in the classroom that I was in, there was one seat that was within reach of the side whiteboard. I claimed that as being my seat forever and ever. It is mine, you cannot have it. I brought a whiteboard marker to class. And I was sufficiently communicative this way that I was on one occasion told to be quiet while not actually capable of speech and not actually making a sound.
Meg: [Laughs] Okay, so sometimes writing, not typing.
Alyssa: Yeah, mostly typing, especially recently, but definitely writing. I generally have not used icon-based systems, that’s probably more because I’m hyperlexic than anything else.
Meg: Yeah, and there is a critique of those as a long-term option because we’re limiting what people can communicate if they aren’t able to add their own information, and words, and phrases to the device.
Alyssa: As long as there is keyboard access available, I’m actually pretty okay with a primarily icon-based system being a long-term solution because many people who I’ve communicated with who use AAC that they set up themselves as adults use, or go back and forth between, icon and text-based systems as they needed to for a variety of reasons. Like, I disagree with the concept of icons, then text. I actually think icons, whenever they’re useful; text, whenever it’s useful. I don’t care if that means icons for the vast majority of the time, text for the few words that aren’t in the device and haven’t been added yet, or if that means text all the time, icons literally never. Both available, the person who uses AAC will figure out the balance that works for them.
Meg: I love this clarification. So, having access to typing, learning how to type, ensures that a person can say what they want to say when they want to say it and isn’t limited by their icon system, and they can decide if and when the icon-based system is easier, is more accessible, works better for them.
Alyssa: I mean, most people can make two or three hits to get to an icon faster than they can type the word that the icon represents, especially if it’s a longer word. Speed is a real concern for AAC use for a lot of people because not everyone actually waits.
Meg: Yeah. I wonder, too, about the cognitive demands of typing and spelling in moments of stress. And I’m not autistic, I’m not an AAC user, so the best analogy I can create for myself is that I am conversational in Spanish, but if I’m stressed, or anxious, or frustrated, I can’t access Spanish very easily and I need to go back to English in those moments. And I hope my instructions and my ability to communicate in an emergency can happen in English, and I wonder if there’s a parallel process with accessing spelling and more complex things.
Alyssa: I’m probably the wrong person to ask about that one. I’ve heard people talk about choosing based on overload, but spelling is not really a thing that I lose.
Alyssa: Though more interestingly, related to your actual example, I speak both English and Mandarin Chinese. I lose or have the ability to speak in these two languages, not quite independently in the mathematical sense, but independently in the laypeople sense.
Meg: Interesting. Interesting. Yeah, my brain is just running around that in circles. Has there been a time that you could access your Mandarin and not your English?
Alyssa: Written language? No. Spoken language? Yes. When I say that this is my ability to speak, I’m quite specific that it’s about oral speech. But yes.
Meg: Got it.
Alyssa: This can happen.
Alyssa: Why, I took an exam of oral proficiency in Mandarin Chinese while spoken English was not working. And I tested as advanced high proficiency by the standards of the American Council on the Teaching of Foreign Languages while English was just not working.
Meg: [Laughs] I guess that could have gone worse. Could have been the other way around. This just really brings us back to your point that people need access to a range of options for how to communicate and the ability to choose what works for them in that moment.
Alyssa: I’ve had very few cases where I would even know if Chinese was working when English wasn’t.
Alyssa: But I’ve had a lot more opportunities to know that English is not working.
Meg: Sure. Yeah. It’s not exactly the same, but it can reduce some of the, I think, stigma to think about the ways that we all use AAC. I was thinking about my four-year-old who’s not autistic. When he needs to apologize, sometimes he will dictate a note to me, and I will write it and he will hand it to the person he needs to apologize to. And that’s easier for him than saying the words, and that is great. Sometimes, I will send a text. Most of us do, right? This is all AAC. And it’s so interesting that when it’s this neurotypical way of doing it, it’s fine. And when it diverges from that, it’s somehow stigmatized.
Alyssa: When a part time AAC user sends a text, is that AAC?
Meg: Is this a riddle? Is it?
Alyssa: It’s one of the questions that I went in circles about in ‘Am I the curriculum?’. I never came up with a good answer. Because when somebody who never speaks sends a text, it’s clearly AAC. When anybody sends a text, the most inclusive definitions of AAC would say that it is. But the less inclusive ones would say that for most people, it isn’t. The ones that call it a clinical practice will generally say no. So, if most people would say ‘No’ when it’s someone who can always speak, but ‘Yes’ when it’s someone who can never speak, what happens when it’s someone who can sometimes speak? Maybe?
Alyssa: I honestly think that handing the person the ability to text, handing them access to all of these tools that may or may not be considered AAC but are non-speech options for communicating is more important than deciding whether or not it counts as AAC. But that asking the question is probably a good idea anyways.
Meg: Yeah, I love that. Alyssa, I want to circle around to your work as a neuroscience researcher. How does your neuroscience research tie in to AAC?
Alyssa: Okay. So, what I worked on for my dissertation was trial-to-trial variation in the timing of a brain response that gets used to control a brain-computer interface. So, let’s say that I — you know what, reaction times. I think that if I throw a ball at you, the amount of time it takes you to start moving to catch that ball will vary a little bit from time to time; that’s the idea. (Also, I would need very good aim and a very strong arm, and I have no idea where you actually are, but pretend that I could throw a ball.) This kind of variation, but at a neural level, is what I was looking at. The thing that they’re reacting to is essentially a switch system. So, we have a grid — it depends on the context, but ours was six-by-six. It has all of the letters and numbers, and they start flashing in rows and columns. The thing that they’re reacting to is that the character they wanted to type has flashed. The amount of time after the flash that it takes for their brain to have the reaction we’re detecting with the computer varies a little bit from trial to trial, or from time to time that the character they want flashes. My dissertation was focused on that. This is related to AAC because, one, it’s a grid of letters and numbers. You can use this to spell things; you can use this to write. It’s literally an AAC system. Two, this trial-to-trial variation is a significant issue for how good the computer is at telling which letter you were trying to type. So, if you have too much of this variation, the computer is currently not quite sure how to handle it. If you have less, then the computer has a better idea of what you were trying to type.
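[Editor’s note: As a rough illustration of the grid speller Alyssa describes, here is a toy simulation. It is not Alyssa’s actual model: the 350 ms nominal latency, the 250–450 ms detection window, and the simple “did a response land in the window” scoring rule are all made-up assumptions. It only shows the general mechanism: rows and columns flash, the target’s flash evokes a response, and more trial-to-trial latency jitter means more responses miss the detection window, so the computer picks the wrong character more often.]

```python
import random
import string

# Six-by-six grid of all 26 letters and 10 digits, as in the interview.
CHARS = string.ascii_uppercase + string.digits  # 36 characters
GRID = [list(CHARS[i * 6:(i + 1) * 6]) for i in range(6)]

# Assumed detection window: a response counts only if it lands here.
WINDOW = (250, 450)  # ms after a flash

def flash_response(is_target, jitter_sd, rng):
    # Only the flash containing the target evokes a response; it lands
    # at an assumed nominal 350 ms plus Gaussian trial-to-trial jitter.
    if not is_target:
        return 0
    latency = rng.gauss(350, jitter_sd)
    return 1 if WINDOW[0] <= latency <= WINDOW[1] else 0

def select_char(target, jitter_sd, rng):
    # Flash all six rows and six columns once; pick the row and column
    # with the strongest detected evidence (ties are effectively guesses).
    tr, tc = next((r, c) for r in range(6) for c in range(6)
                  if GRID[r][c] == target)
    row_ev = [flash_response(r == tr, jitter_sd, rng) for r in range(6)]
    col_ev = [flash_response(c == tc, jitter_sd, rng) for c in range(6)]
    best_r = max(range(6), key=lambda r: (row_ev[r], rng.random()))
    best_c = max(range(6), key=lambda c: (col_ev[c], rng.random()))
    return GRID[best_r][best_c]

def accuracy(jitter_sd, n=2000, seed=0):
    # Fraction of attempted characters that the "computer" gets right.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        target = rng.choice(CHARS)
        hits += select_char(target, jitter_sd, rng) == target
    return hits / n

for sd in (20, 60, 120):
    print(f"latency jitter sd={sd:3d} ms -> selection accuracy {accuracy(sd):.2f}")
```

Running this shows accuracy dropping as the jitter grows, which is the “so what” of the variation: the speller itself is unchanged, but the fixed-window detector increasingly misses the response.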
Meg: So, what are the implications?
Alyssa: The implications, since trying to give people less variation is not really within my skill set and could have unintended side effects, the ‘So what?’ has a few aspects. One, hey, we have figured out one of the reasons that brain-computer interfaces can be harder for some people to use. Two, now that we’ve identified the problem, what can we actually do about it? I hope that it is possible to come up with algorithms that are better able to detect responses that happen a little earlier or a little later so that this variation is less of a problem. It would be nice. Alternatively, we could look into brain-computer interfaces for communication that rely on other responses that have less of this variation. The one that I used for my dissertation, while it is effective and has been used effectively, including at home by people who have ALS to communicate, is kind of notorious for having this kind of variation as far as the cognitive scientists are concerned. And that variation is increased in a lot of neurological conditions, including ALS. So, maybe using a response that has less of that would help.
Meg: Where do you see this going? Where do you see brain-computer interfaces, specifically for autistic people, where do you see that going in the next decade or so?
Alyssa: I mean, I’m not super familiar with brain-computer interfaces for autistic people because my dissertation was on them for people with ALS. My understanding is that at this time, brain-computer interfaces aimed at autistic people tend to be intervention tools rather than augmentative and alternative communication tools, because autistic people by and large can move, and therefore we assume that autistic people can use other selection methods to use AAC effectively. I don’t know if that is always borne out in practice. It may be that some autistic people actually would benefit from brain-computer interfaces for communication because of motor issues, which may mean that even if we are making movement happen, we may or may not have sufficient control over the movement that is happening. For AAC, I actually think this is particularly relevant for autistic people who might currently be using eye gaze and are having difficulties with the eye gaze. Not in that they can’t direct their gaze to what they want to select at all, but that maintaining their gaze on it consistently for the entire time needed to make a selection may be difficult, because, at least for the kind of brain-computer interface that I work on, having some gaze control really does matter. But if your eye gaze jitters off for a fraction of a second during the five seconds that you’re making the selection, depending on when it was in those five seconds, your eye tracker may fail to make a selection but your brain-computer interface might still make it.
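[Editor’s note: To make the eye-gaze point concrete, here is a minimal sketch. The 5-second dwell, 50 ms sample rate, and 90% threshold are illustrative assumptions, not values from any real eye tracker or BCI. It contrasts a strict dwell-based selector, which a single momentary flick away cancels, with a detector that integrates evidence over the whole window, closer in spirit to how the brain-computer interface scores a selection.]

```python
DWELL_MS = 5000   # assumed 5-second selection time, per the example above
SAMPLE_MS = 50    # assumed gaze sampling period

def dwell_select(gaze_on_target):
    # Strict dwell selection: every single sample must stay on target,
    # so one brief jitter away cancels the whole selection.
    return all(gaze_on_target)

def integrating_select(gaze_on_target, threshold=0.9):
    # Evidence-integrating selection: succeeds if gaze was on target
    # for at least `threshold` of the dwell period overall.
    return sum(gaze_on_target) / len(gaze_on_target) >= threshold

samples = [True] * (DWELL_MS // SAMPLE_MS)  # 100 gaze samples, all on target
samples[60] = False  # gaze jitters off for one 50 ms sample around 3 s in

print(dwell_select(samples))        # strict dwell fails the selection
print(integrating_select(samples))  # the integrating detector still selects
```

Under these assumptions, the same fraction-of-a-second jitter that makes the eye tracker fail leaves the integrating detector’s selection intact, which is the asymmetry Alyssa describes.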
Meg: This is so interesting, Alyssa. I feel like this conversation today could be six or seven different whole episodes with the territory we’re covering.
Alyssa: You want me back?
Meg: Yes. Yes, come back, please. I want to ask you as we wrap up, if people have one kind of overarching takeaway from our conversation today, what’s the main thing you hope people have gotten from this talk?
Alyssa: Offer the AAC to everyone. Yes, literally everyone. Don’t push it on the people who ignore it, but offer it with the actual support needed for them to learn it, to everyone. The ones who don’t need it will continue to ignore it. The ones who’d benefit may actually pay attention eventually. If you’re not, you know, an asshole about it.
Meg: [Laughs] Thank you, Alyssa. Tell us what you’re working on now and where we can find you online.
Alyssa: All right. I have a not-super-updated blog, yesthattoo.blogspot.com. I have a page on academia.edu. I have — I always forget how to spell this — I think it’s ORCID, it’s an open researcher ID of some sort that also lists a good bit of my work. I’m on Twitter at @yes_thattoo. There’s an underscore in there somewhere.
Meg: I’ll check —
Alyssa: Clearly, all of these things are going to need to go in text in the podcast details.
Meg: Yes, I will track it all down and link to it in the show notes.
Alyssa: Oh, I can send it to you. I just — oh, God, how do I spell this in the instant that you asked.
Meg: I will get it from you and I will put it in the show notes. Thank you so much, Alyssa.
Alyssa: Thank you. And oh, one thing that I would like to go into at some point. I am not always able to post full-text copies of my research papers for copyright reasons. However, if you contact me and ask me for them, I am always able to send them. Please, please ask me if you want to read something and hit a paywall.
Meg: That is excellent. So, I’ll put your email address in the show notes as well. Thank you so much for that. That’s so important.
Alyssa: Thank you.
Meg: All right. I’ll see you next time when we have you back on the show.
Alyssa: All right. See you, take care.
I hope you enjoyed that interview as much as I did. If you’re listening to this episode before November 14th and are interested in learning more about teaching AAC, we have an amazing free training coming up for you. It’s called ‘Moving Beyond Compliance-Based Strategies for AAC: Why and How to Support Autonomous Communication’, and it’s taught by Kate McLaughlin, who is an SLP specializing in AAC. This free training will take place on November 14th at 3pm Eastern Time, and there’s a recorded replay for everyone who registers. Visit learnplaythrive.com/AACwebinar, all together as one word, to grab your spot. That’s learnplaythrive.com/AACwebinar. Hope to see you there!
Thanks for listening to the Two Sides of the Spectrum podcast. Visit learnplaythrive.com/podcast for show notes, a transcript of the episode, and more. And if you learned something today, please share the episode with a friend or post it on your social media pages. Join me next time, where we will keep diving deep into autism.