So far in this series I’ve argued that GenAI has a discoverability problem, shared some examples that broke my own mental model, and explored how access and equity shape who gets to discover what. In this post I’ll talk about what happens after discovery, because knowing that a capability exists and being able to use it are very different things…
Usability
Discovering that AI can analyse a complex spreadsheet, build a web application, or synthesise research across dozens of sources is one thing. Actually getting it to do those things reliably, in a way that produces useful results, requires a kind of expertise that the technology itself does nothing to teach you.
I’ve written before about the three dimensions of expertise needed to use AI well: domain, technological, and situated. You need enough domain knowledge to evaluate whether the output is any good, enough technological knowledge to structure your interaction with the platforms effectively, and enough situated knowledge to understand how the result will land in your particular context, whether that’s a classroom, a staffroom, or a school leadership meeting.
The usability problem shows up constantly in professional development. I’ll demonstrate something in a session, like using an AI platform to build a simulation or a data dashboard, and get a room full of nodding heads. Then, later in the workshop, half the participants report that they tried it and “it didn’t work.”
When we dig into what happened, the issue is often that the participant has access only to a different platform, a locked-down enterprise variant, or a restricted account. Sometimes the inherent randomness of the AI itself produces confusing errors. And other times, things that were demonstrated five minutes ago literally just… don’t work.
This is compounded by the fact that AI platforms offer almost no scaffolding for complex tasks. If you want to use Excel’s Copilot to do something sophisticated, you need to know enough about spreadsheets to describe what you want precisely. If you want Claude to help you build a data analysis app, you need to understand what questions you’re trying to answer before you start. The technology amplifies expertise; it doesn’t replace it. I’ve argued before that this is precisely why novices are most at risk when using AI, and the same logic applies to educators who are novices with the technology itself.
In user experience (UX) terms, usability is about whether a user can accomplish what they set out to do without unnecessary friction. GenAI has extremely low usability for complex tasks, because the gap between “I know this is possible” and “I can make this happen” is filled with invisible prerequisites that the interface never communicates. You have to already know a lot to get value from something that looks like it should be simple.

Transferability
Usability is the challenge of getting one thing to work. Transferability is what happens after that, when the experience of successfully completing one task changes how you think about every other task.
This is where the IYKYK concept of these posts really comes into play. When a teacher discovers that AI can analyse a dataset of student results and identify patterns, they don’t just gain a new app for data analysis. They gain a new question: what else in my work involves patterns that I’m currently identifying manually? Attendance data, behavioural incident logs, feedback trends across a cohort. The capability transfers not because the AI “feature” transfers, but because the teacher’s mental model has shifted.
I see this happen all the time in schools and universities. A curriculum leader learns to use AI to audit assessment tasks for alignment with learning outcomes, and within a few weeks they’re using it to review scope and sequences, analyse student work samples, and draft professional learning plans. Nobody taught them those additional use cases. The first successful interaction gave them permission to ask “what else?” and the situated expertise they already had as experienced educators did the rest.
This is also why the three-dimensional model of expertise applies to professional development with GenAI. Transferability depends on having strong foundations in at least one dimension. A teacher with deep domain expertise can transfer AI capabilities across their subject area because they understand the underlying structures of their discipline. A teacher with strong situated expertise can transfer what they learn about AI into relationship management, communication, and pastoral care. Technological expertise alone, without the other two dimensions, tends to produce someone who can do clever things with AI but struggles to apply them meaningfully.
The implication for schools is that professional development should focus less on teaching specific AI use cases and more on supporting that first breakthrough, the moment where a teacher’s mental model shifts and they start generating their own use cases. Everything after that initial shift is transferable, provided the teacher has the domain and situated expertise to carry it.
If You Know, You Know
The five abilities I’ve been discussing in this series (findability, discoverability, accessibility, usability, and transferability) describe a journey that every individual goes through, in roughly that order, as they develop a more complete understanding of what GenAI can do.
You have to be able to find the obvious things (findability). You have to encounter new things you didn’t know about (discoverability). You have to be able to get to them (accessibility). You have to be able to make them work (usability). And you have to be able to carry what you’ve learned into new contexts (transferability).
Lots of professional development in AI focuses heavily on the first one, pointing teachers towards obvious platforms and demonstrating their basic capabilities. Accessibility is more often than not a barrier, usually in the form of institutional licences or approved tool lists. Very little addresses usability in any meaningful way, and almost none explicitly supports transferability.
If we want educators to move beyond a limited mental model of AI, we need to attend to all five. The next post is an open call: I want to hear what you’ve discovered.
This is the fourth in a series of posts exploring the IYKYK concept. Subscribe to the mailing list for updates.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
