There’s a problem with GenAI that we’re not talking about enough. It’s not hallucinations, or bias, or the environmental cost of training large language models – though all of those issues are of course important.
It’s the fact that most people have no idea what the technology can actually do. That’s not our fault: it’s a design problem, and it has a name.
The Discoverability Problem
Bear with me, because I’m about to go off on a journey through a discipline most of us encounter every day, but have no direct involvement with: User Experience (UX) design. UX is a field that focuses on making software easy and intuitive to use. It’s also concerned with aesthetics, functionality, and even joy. It’s the thing that makes the classic iPod wheel click. The perfectly timed ‘skip intro’ button on Netflix. Using your face to log in to… everything.
And it’s something which AI developers, apparently, are hopeless at.
In UX design, “discoverability” refers to how easily people can find and understand features within a system. The concept was popularised by Don Norman, the cognitive scientist behind The Design of Everyday Things, and has been a foundational principle of interface design for decades. Nielsen Norman Group’s article offers a useful distinction between two related but separate ideas: findability, which is whether users can locate something they already know exists, and discoverability, which is whether users encounter functionality they didn’t know about in the first place.
Traditional software handles discoverability through menus, buttons, tooltips, and visual cues. Microsoft Word has a ribbon full of features. Photoshop has toolbars. Even your phone’s settings app gives you a scrollable list of everything you can configure, with a text search box at the top for quick access. These are what designers call affordances: signals that tell you what’s possible.

GenAI has barely any of these signals. A typical AI chatbot app has a text box and a blinking cursor. There is nothing to click, nothing to browse, no menu of capabilities. The interface gives you zero signals about what the system can do. In UX terms, this is a terrible discoverability failure.

Research from NN/G in 2025 confirms this plays out exactly as you’d expect. In usability testing of AI features across major platforms, the majority of participants didn’t expect AI features to exist, didn’t notice them when they encountered them, and couldn’t find them even when directed to look. As the researchers noted, users’ mental models simply don’t include AI capabilities as something to look for.
Looking for the Instruction Manual
In the past few years I’ve worked with over 300 schools on AI professional development, and the pattern is consistent from the beginners through to the “fast movers”.
Most educators’ first interaction with GenAI was ChatGPT, sometime in late 2022 or early 2023. They opened it up, saw a clean interface that looked a lot like Google, typed in a question or asked it to write something, and got a response.
From that moment, their mental model of what AI is was set.
If your first experience was asking ChatGPT to summarise a document, then AI is a summarising tool. If you used Microsoft Copilot as a replacement for Bing, then AI is a search engine. If a colleague showed you how to generate a quiz, then AI is a quiz generator. That first interaction becomes a ceiling. You file the technology under a category, and stop exploring.
I’ve written before about the fact that AI can probably do more than most educators think. But the issue isn’t just that people underestimate the technology. It’s that their mental model actively prevents them from seeing what else is possible. You don’t spontaneously think to ask a “writing tool” to build you a web application, convert a file format, analyse a dataset, or connect multiple software systems together.
This mental model extends to frameworks that offer a variety, or “menu”, of ways to use GenAI. Providing a list of ways to pose research questions, summarise texts, brainstorm, or evaluate seems helpful. But all of these tasks start from the same flawed assumption: GenAI is primarily a text-based chatbot interface. Text in, text out. Transactional and, with the technologies we have in 2026, wrong.
I know this is a problem, because many of my 2023 discussions of GenAI held the same insufficient mental model. My earliest posts like Practical Strategies for ChatGPT in Education centred on the titular application, and offered prompts that were almost exclusively text-based. That was partly a limitation of the technology at the time, and partly a limitation of me.
If someone whose full-time job it is to understand and explain this technology was constrained by the same mental model, it’s no surprise that educators with less time and fewer reasons to explore are still working within the same boundaries. The question is whether the developers themselves have done anything to help.
Platforms like ChatGPT and Google Gemini have experimented with prompt controls and conversation starters to address this, adding buttons and suggested tasks to help users discover capabilities. But for many educators, these interventions have come too late. The mental model is already locked in.

Beyond the Chatbot
GenAI is not a chatbot. It’s not even a category of software in the traditional sense.
Even Microsoft Copilot, which is limited compared to other platforms, includes a vision language model, image recognition, reasoning capabilities, and the ability to write code. Products like Claude, ChatGPT, and Gemini go further, with file handling, data analysis, image generation, and increasingly agentic capabilities that allow AI to use tools and complete multi-step tasks semi-autonomously.
Follow that logic to its conclusion. Everything that happens on a computer happens through code and software. GenAI can write code and manipulate software. Therefore, in principle, anything that can be done on a computer can potentially be done by GenAI. It’s not a chatbot: it’s an interface.
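To make the “interface, not chatbot” idea concrete, here is a minimal sketch of the pattern behind agentic tool use: the model doesn’t just emit text, it emits instructions that a surrounding program executes on its behalf. Everything here is invented for illustration — `fake_model` is a scripted stand-in for a real LLM API call, and the single `convert_format` tool is a toy — but the loop itself is the shape that real agentic platforms follow.

```python
# Toy illustration of agentic tool use. The "model" is a stub that
# returns canned decisions, standing in for a real LLM API call.
import json

def fake_model(messages):
    """Stand-in for an LLM. A real model would choose a tool based on
    the conversation; this stub is scripted for demonstration."""
    if not any(m["role"] == "tool" for m in messages):
        # First pass: ask the surrounding program to run a tool.
        return {"tool": "convert_format", "args": {"text": "a,b\n1,2", "to": "json"}}
    # A tool result has come back: give a final answer and stop.
    return {"tool": None, "answer": "Converted your CSV to JSON."}

def convert_format(text, to):
    """An ordinary function exposed to the model as a 'tool'."""
    if to == "json":
        header, row = text.splitlines()
        return json.dumps(dict(zip(header.split(","), row.split(","))))
    raise ValueError(f"unsupported target: {to}")

TOOLS = {"convert_format": convert_format}

def run_agent(user_request):
    """Minimal agent loop: call the model, execute any tool it asks
    for, feed the result back, and stop at a final answer."""
    messages = [{"role": "user", "content": user_request}]
    while True:
        decision = fake_model(messages)
        if decision["tool"] is None:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("Turn this CSV into JSON: a,b / 1,2"))
```

The point of the sketch is the loop, not the stub: once a model can request that arbitrary functions be run, the chatbot text box is just one thin layer over a general-purpose interface to software.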
As I argued in The Myth of Inevitable AI, we shouldn’t accept technology companies’ framing of what this technology is for. But equally, we shouldn’t let our own limited first impressions define the boundaries of what’s possible.

If You Know, You Know
So what do we do about a technology whose capabilities are essentially unbounded, but whose interface gives us no way to discover them?
We start from the assumption that we don’t know. That our mental model is incomplete, that our first experience with the technology showed us a fraction of what exists, and that there is much more to discover. This is the IYKYK principle: if you know, you know. Once you discover one capability that breaks your existing mental model, you start asking different questions about everything else, and lift the ceiling a little. You move from “can AI write me an email?” to “what else can this thing do that nobody told me about?”
The discoverability problem in AI isn’t going to be solved by better interface design alone. It requires a deliberate shift in how we think about and engage with these systems, starting from curiosity rather than assumption.
In the next post, I’ll share some concrete examples of capabilities that broke my own mental model, and, I suspect, will break yours too.
This is the first in a series of posts exploring the IYKYK concept. Subscribe to the mailing list for updates.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch: