A lot has been written in the past few years about how artificial intelligence platforms like ChatGPT can be beneficial to neurodivergent learners, but much of what I’ve read is more tech hype than reality. This is especially true in the land of edtech, where promises of chatbots supporting ND students are rife, but evidence is incredibly thin on the ground.
I have a combination autism/ADHD diagnosis, and in the fine print of that diagnosis, a few very specific neurological traits that certainly shape the way I interact with these technologies. I’ve also spent almost 20 years in education, and worked with many ND students. I spent several years as a non-executive director on the board of Reframing Autism, a national nonprofit that provides education and support to autistic adults, and yet it never really occurred to me until now to write this particular article.
I suppose, trapped inside my own head, I never really thought that I use AI as an “assistive technology.” But if I zoom out and try to look at my patterns of use a little more objectively, more like I might with a student in my class using a technology, then I can see some pretty obvious ways where AI fits the bill.
In this post, I’m going to cover areas of my personal use of AI which I believe would be useful for anyone, autistic, ADHD, ND or otherwise, but particularly well suited for others who share a similar neurological makeup. First though, I’m going to talk about why AI is often a horrible technology for people like me.
Pathologising Pinball Machines
Generative AI tools such as large language models carry two serious risks for ND users. The first is that they are trained on a large corpus of often pathologising, inaccurate, and at times harmful information about neurological and developmental conditions such as autism. The second is that they can be incredibly addictive.
First of all, the training data problem. Large language models like GPT get the bulk of their training material from the internet, a resource which has grown since the 1990s but which is still far from a complete record of civilisation. The internet archive privileges a certain type of worldview, and the output of chatbots like Google Gemini or ChatGPT tends towards the most probable, predictable point of the distribution curve of the training data. So when a large portion of the training data contains harmful or inaccurate representations of neurodivergence, common fallacies such as “autistic people can’t make eye contact,” “ADHD people can’t complete tasks,” or “dyslexic people can’t be writers” end up baked into the chatbots. Even when guardrails are put in place to discourage chatbots from outputting blatantly discriminatory content, the weight of the input has a profound effect on the output.
I am constantly encouraged by AI systems to do things in certain ways or think through problems from a certain perspective, based on flawed assumptions of what it’s like to be inside my head. For example, if you begin a prompt with a chatbot and mention ADHD, or if it has come up in conversation and been retained by the model’s “memory,” there is a high likelihood that subsequent conversations with that chatbot will recommend things like time blocking, breaking down projects into manageable tasks, and other common ADHD productivity hacks.
Some of these methods are already being monetised and aggressively promoted through platforms like Motion. I have no doubt that products like these work for some people, but it’s certainly not true for everyone, and it isn’t true for me. When AI defaults to the most probable output of “what would be useful for an ADHD person”, I find it at best condescending, and at worst, dangerously wrong. Consider, for example, Claude’s offer in a recent chat to become a “digital second brain” so that I never have to remember anything. I can’t see how that would be useful advice for anyone.
There are more harmful instances out there. The treatment of autistic children in some countries, including the US where most training data originates, has long been contentious with the autistic community. Certain practices have been recommended for years, including in academic literature and in the online sources that train language models. Should the models surface this advice, they put autistic children and adults at risk of genuine harm. The issue is not just the practices recommended, but the shallow way in which they are reinterpreted by the chatbot. Applied Behaviour Analysis (ABA), for example, can be very “unsafe” when used in a punitive or generalised manner, and products like ChatGPT are not capable of nuance.

Perhaps a more tangible, genuine harm from these products is just how damned addictive they are. This is, of course, by design. Chatbots are pinball machines: fast, shiny, noisy things that get right into your dopamine factory. They’re designed this way, owned as they are by the same companies that have given us social media for the last decade and a half. Dark pattern UX, like asking follow-up questions at the end of every chat message, encourages users to stay on for as long as possible. There are already documented links between neurodivergence and increased levels of addiction to digital platforms. The reward centres of our brains are wired differently, and the incessant dopamine hits you get from likes and notifications on social media are replicated in the frictionless, instant responses of chatbots.

I’ve found this most recently in my experiments with Claude Code. I mentioned in a post a week ago that I’d created five websites in a matter of hours, taking existing content that I’d published elsewhere and turning it into polished sites. Being able to get the ideas out of my head and into production is one of the advantages I’ll talk about in a moment. But once I’d done it once, then twice, then four and five times, I reached a point where if I wasn’t using AI to produce something, I felt as if I was missing out.
A strange kind of ennui settled in, a listlessness, that feeling you get when you think you’ve forgotten your keys, like I should be doing something: I should be making something with AI. Otherwise, what am I doing here? I’m wasting my time. These are obviously harmful patterns of thought.
I will say that those negatives can be avoided by someone who knows what to expect and how to deal with them. Once I registered those addictive patterns of behaviour with Claude, I was able to turn it off, and when I notice unhelpful advice from chatbots, I know to ignore it. Using these technologies with younger learners runs the risk that they don’t yet have those skills, and that’s probably a conversation for a future post.
But for now, let’s look at the ways I find these technologies useful.
Positive Uses of GenAI
Capturing Ideas
The ADHD part of me loves bouncing around on a stage, running workshops, and generally carrying on like a pork chop, as my mother-in-law says. I can happily spend two or three days back to back, six hours a day, running sessions and delivering keynotes. The autistic side of me hates this and then shuts down for a week afterwards. I become introspective, nearly mute, and try to spend at least a few days in my house on a farm, 35 kilometres away from the nearest other people.

There’s a sweet spot at the end of the high-energy engagements where I find that I become very productive and creative. Unfortunately, because I live in the middle of nowhere, those sweet spots usually occur as I’m on a five-hour drive, sitting in the car, returning from face-to-face engagements. My brain, fuelled by my good friends dopamine and adrenaline, runs at 1,000 kilometres an hour (luckily, the speed limit here usually doesn’t go beyond 110).
In the past, this has led to a certain amount of frustration. I’d have ideas – surely great ideas – in the car on the ride home, and short of pulling over to write them down, I would lose those ideas, and in the following few days of introverted stupor, would frequently feel resentful that these things had slipped my grasp and I couldn’t remember them anymore.
So a common use of artificial intelligence for me is to now record those moments of hyper-creativity as voice memos, and then transcribe them. And as a further step, run the messy transcript through an LLM. More recently, I’ve even been able to add Claude MCPs into the equation. I wrote about this a couple of weeks ago, and you should check that article out for more technical information. But essentially, MCP allows Claude to connect to other applications, so I can verbally dump ideas at lightning speed into a voice memo, hit transcribe, and then have Claude review the ideas, suggest an order of importance and feasibility, and even add them to my Todoist app in undated, pressure-free projects like “Q1 Ideas.” When I’m feeling a little bit more level-headed, I can revisit these and decide whether they were actually good ideas or not, or whether I was ranting like a loon, talking to myself in the car.
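The author’s actual setup runs through Claude’s MCP connectors, which I won’t reproduce here. As a rough illustration of the same capture–triage–file flow in plain Python, here is a minimal sketch against the Todoist REST API; the `extract_ideas` heuristic, the API token, and the project ID are all my own assumptions, not the author’s configuration:

```python
import re


def extract_ideas(transcript: str) -> list[str]:
    """Split a messy voice-memo transcript into candidate idea lines.

    Assumes ideas are separated by sentence-ending punctuation or the
    spoken filler phrase "next idea" -- a heuristic for illustration only.
    """
    chunks = re.split(r"(?:[.!?]\s+|\bnext idea\b)", transcript, flags=re.IGNORECASE)
    return [c.strip().rstrip(".!?") for c in chunks if len(c.strip()) > 3]


def send_to_todoist(ideas: list[str], token: str, project_id: str) -> None:
    """Post each idea as an undated, pressure-free Todoist task.

    The token and project_id are hypothetical; this requires a real
    Todoist account and API token to actually run.
    """
    import requests  # network step, kept out of the pure-function part above

    for idea in ideas:
        requests.post(
            "https://api.todoist.com/rest/v2/tasks",
            headers={"Authorization": f"Bearer {token}"},
            json={"content": idea, "project_id": project_id},
        )


if __name__ == "__main__":
    memo = (
        "Build a workshop on prompt design. "
        "Next idea maybe a newsletter about rural edtech"
    )
    print(extract_ideas(memo))
```

In practice the transcription and triage steps are handled by the chatbot itself; the point of separating the pure `extract_ideas` helper from the network call is that the messy, judgement-heavy part can be swapped for an LLM pass without touching the filing step.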
More Voice to Text
I have two modes when I’m working: hyper-focus and zero-focus. I can sit at a desk to the exclusion of everything else, including eating, drinking and going to the bathroom, for six to eight hours if I’m hyper-focused on a task. This is how I wrote most of my PhD thesis. Having the luxury of that amount of time to just sit down and do a thing is great, but there’s no grey area for me. It’s either that or I have absolutely no focus and no ability to stay on a single task.
When I’m in the latter kind of mood, I find it very difficult to write. It’s not that the ideas stop coming; much like my drive home, sometimes there are too many ideas rather than too few, but I don’t have the attention span to sit down and actually commit them to the page.
For the past three years, I’ve been using voice to text to write articles verbally while I’m running, walking, or driving. For me, this feels like a very natural way to write articles. My blog posts are structured and voiced very much like spoken text anyway. I think in paragraphs, and I usually have an idea for an article before I sit down to write it, meaning that if I head out the door with my AirPods, phone, and a rough idea for an outline, I can return 45 minutes later with a verbally drafted article. Using AI to then do the formatting, line edits, and get it from voice memo to published blog post is a fairly straightforward task that generally only takes me 10 to 15 minutes, which even I can focus on.

Being physically out and moving, whether on foot or on wheels, forces my brain and my body into a space where I can get the article out in one piece, rather than the fragmented approach I sometimes take sitting at a keyboard. I don’t know whether it’s the act of movement or the fact that sitting behind the steering wheel I literally can’t use my phone to open multiple tabs, but something about being mobile makes it much easier to get the job done, and that is entirely facilitated by AI transcription tools and language models.
Social Stories for Grown-Ups, or Why Conferences Are Not a Vibe
I mentioned the joy and high energy of presenting at conferences. Speaking and presenting is something I genuinely enjoy doing as often as I’m able to, but there are plenty of things about conferences in particular that my mind recoils from. Sprawling, noisy, chaotic conferences like EduTech fill me with anxiety, despite the best efforts of the events management teams. And I know some of these people personally, and they do great work. These events are anathema to my autistic self.

You arrive at a conference amid a torrent of other meandering attendees. You try to decipher the printed signs, placards, banners, LED screens and TVs mounted on the wall, which all seem to be giving you competing directions. You know you have to register, even though you feel like you already did that online, and despite having a printed copy of the ticket in your bag, you’re told that you have to go and get a different printed lanyard.
If you’re a presenter, your lanyard comes from somewhere else, so you can spend another half an hour finding where that “somewhere else” is. Because conferences often feature rotations of presenters moving through the same room, sometimes every 20 minutes, getting in, doing tech setup and getting out again is just as complicated as finding your lanyard.
Enter Social Stories for Grown-Ups™. Social stories are often an effective way of helping young ND people to navigate life. I’ve seen social stories for everything from starting kinder through to how to ask a teacher for help, and basics like going to the grocery store. But reflecting on my use of AI, I realised that I spend a lot of time creating social stories for myself for things like conferences.
If you fire up Google Gemini or Claude and tell it that you’re going to an event at a particular well-known location, provide the itinerary, maybe a link to the conference website, and even throw in a few emails back and forth between yourself and the conveners, you can create a remarkable quick-start guide:
- Arrive at conference centre on the eastern side.
- Enter through Gate 1, park and head upstairs, emerging in Area H. Proceed 500 metres left to sign-in table for presenters. Acquire lanyard.
- Visit tech setup room, two doors on the right. Your session is at 1:30pm in Room 214. Take escalator to second floor. Follow signs for Rooms 200-222, fourth or fifth room on left.
- Tech setup required: HDMI input (this is fine with your MacBook Pro), copy of slides presented as handout to participants.
- When you’re finished, the nearest cafe outside of the conference centre is two blocks away. Head there immediately.
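For anyone who would rather script this than paste it all into the chat window each time, here is a minimal sketch of assembling that kind of prompt. The helper function, venue, itinerary, and notes are all hypothetical; the author describes doing this directly in Gemini or Claude:

```python
def build_quickstart_prompt(venue: str, itinerary: str, notes: list[str]) -> str:
    """Assemble a 'social story' prompt for a chatbot from event details.

    A hypothetical helper: venue, itinerary, and notes are whatever you
    have on hand (conference website text, organiser emails, and so on).
    """
    joined_notes = "\n".join(f"- {n}" for n in notes)
    return (
        f"I'm presenting at an event at {venue}. Using the itinerary and "
        "notes below, write me a short, numbered quick-start guide: where "
        "to enter, where to register as a presenter, how to find my room, "
        "what tech setup to expect, and a quiet place to decompress after.\n\n"
        f"Itinerary:\n{itinerary}\n\nNotes:\n{joined_notes}"
    )


# Example with made-up details:
prompt = build_quickstart_prompt(
    "a large city convention centre",
    "1:30pm session, Room 214, second floor",
    ["Presenter lanyards are collected separately", "Rooms are HDMI only"],
)
print(prompt)
```

The prompt string can then be pasted into any chatbot along with links to the conference website; the value is in gathering the scattered details into one request rather than in anything clever about the code.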
I cannot overstate how helpful it is to have clear directions for complex social interactions, even as a more-or-less fully formed adult human.

Other Uses
Those three areas, the idea capture, the voice transcription, and the social storying, are probably what I would call my three most assistive uses of AI, but there are plenty of other ways I interact with the technology. Most of these are ad-hoc uses responding to social or work interactions, which are my Achilles’ heel.
For no reason other than my own amusement, I will present them as titles of Friends episodes:
- The One With The Email That I Couldn’t Follow Because It Was Too Long
- The One With The Text Message That Seemed Sarcastic But Maybe It Wasn’t
- The One With The Reason I Shouldn’t Engage In That LinkedIn Argument
- The One Where I Have To Make A Phone Call And Need A Script
- The One Where I Need To Say No But Don’t Want To Cause A Scene
- The One Where Someone Changed The Plan At The Last Minute
- The One Where I Need To Publish 5 Websites Right Now
I could go on, but I think the pattern is clear. The most useful applications of AI for me as a neurodivergent person have almost nothing to do with what edtech companies or AI developers are promising. Nobody is selling “AI generated social stories for adults who find conferences overwhelming” or “capture your manic car ideas before they disappear forever.” The genuine assistive value I get from these tools is deeply personal, cobbled together from my own patterns and needs, and almost entirely reliant on voice interfaces and plain language models rather than purpose-built “ND support” products that turn my calendar into a chaos-rainbow.

That’s probably the most important takeaway here. If you’re ND and curious about AI, ignore the hyper-productivity gurus and the shiny ADHD apps. Instead, pay attention to where you already struggle, where the issues crop up in your day, and experiment with whether something like a voice memo and a language model can help, even a little. And if it doesn’t work, or if you notice yourself sliding into that dopamine-chasing ennui I described earlier, turn it off. The best assistive technology is the one that helps you do the thing and then gets out of the way.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
