Resistance Training Toolkit: Expertise

This is the first in a series of posts where I dig into each of the five points of resistance I introduced in Resistance as a Framework for Combating Cognitive Offload. I’m starting with expertise, because expertise is – perhaps paradoxically – both the foundation and the goal. While learners by definition are not experts, they still require some expertise in order to learn with AI.

In the original framework post, I described resistance as something like strength training for the brain: deliberate, designed friction that builds cognitive capacity rather than eroding it. Expertise, then, is the first muscle group we need to work on. It’s probably also the squat rack of the resistance framework. Nobody wants to do it. Everyone would rather skip leg day and run ahead to the flashier exercises. But without it, everything else collapses under load.

In this post, I’ll go deeper into what expertise means, why it’s central to the AI-in-education conversation, and what it looks like to design for expertise as resistance in the classroom. I’ll also share three activities from the AI Resistance Training Toolkit – a resource I’m developing with schools for 2026 and 2027 – that demonstrate what building expertise before AI access can look like in practice.

Make sure you’ve read the original post before you read on!

What is expertise?

Expertise is a word that gets thrown around a lot without much precision. In everyday use, it tends to mean “knowing a lot about something.” But there are more complex and perhaps more useful definitions.

K. Anders Ericsson, whose work on deliberate practice shaped decades of expertise research, described expert performance not as the accumulation of facts but as the construction of sophisticated mental representations: internal structures that allow experts to perceive patterns, anticipate outcomes, monitor their own performance, and make rapid judgements in novel situations. In Peak: Secrets from the New Science of Expertise, Ericsson introduced this concept to a wider audience: experts build internal structures through practice that cannot be borrowed, transferred, or downloaded. A chess grandmaster doesn’t just know more moves than a novice: they see the board differently. They recognise configurations that a beginner can’t perceive. The knowledge is organised in a qualitatively different way.

Chi, Feltovich, and Glaser’s 1981 study also demonstrated this qualitative organisation of knowledge. Physics experts categorised problems by deep structural principles like conservation of energy and Newton’s second law. Novices sorted by surface features: inclined planes, pulleys, springs. Same problems, fundamentally different cognitive architecture. That reorganisation from surface to depth is what expertise looks like from the inside, and it only happens through sustained, effortful engagement with the material.

This is important for the AI/learning conversation, because GenAI can produce outputs that look expert without any of that underlying cognitive architecture being present in the person who prompted it. A student can generate a beautifully structured essay, a well-reasoned argument, or a functioning piece of code without developing any of the mental representations that would allow them to understand why it works, spot when it doesn’t, or adapt it to a new context.

The AI learning paradox

I first wrote about this problem in 2024, in Expertise Not Included. To get the most out of AI, you need to know enough about your subject to direct the AI meaningfully and to detect when it’s producing plausible nonsense. And as the complexity of the topic increases, more expertise is required to fact-check the output, not less.

This creates a genuine paradox for learners. To learn through AI, you need the expertise you’re trying to develop. You need to know what questions to ask. You need to recognise when the AI is hallucinating, oversimplifying, or confidently wrong. You need enough understanding to evaluate whether the AI’s response is actually useful or just fluent.

Educator Jeppe Stricker has called this the Exhaustion Problem: GenAI produces an unpredictable mix of polished rubbish and coherent information, forcing users into a relentless sorting process that demands precisely the expertise they are trying to develop. The epistemological responsibility that was once distributed across institutions, textbooks, and credentialed experts now lands squarely on the individual learner, the person least equipped to handle it.

I’ve experienced this firsthand. When I use AI to build things in code, I can push it much further in areas where I have some understanding of what the code should do, what good architecture looks like, and where the failure points tend to be. In areas where I’m a genuine novice, the dialogue with the AI quickly becomes vague and frustrating. The AI holds more information than I could ever learn, but it draws on it in ways I can’t direct, and I lack the internal reference points to evaluate what comes back. TL;DR: I don’t know how to ask the right questions.

Three dimensions of expertise

Since writing that original article, I’ve come to think of expertise as having three dimensions, each of which matters for how we think about AI in education.

Domain expertise is the most obvious: what you know about your subject. The content knowledge, the disciplinary vocabulary, the conceptual frameworks that allow you to think within a field. This is what most people mean when they say “expertise.”

Technological expertise is – in this context – the knowledge of how AI works and how to work with it. Prompt construction, understanding model limitations, knowing when to trust output and when to push back. This is a skill set in its own right, and one that’s developing rapidly.

Situated expertise is the dimension that gets overlooked, and it might be the most important. This is the contextual, relational, embodied knowledge that develops over years of practice in a specific setting. It’s the teacher who reads a room mid-lesson and adjusts. The nurse who spots something the diagnostic AI misses because they’ve seen it before and know what “not quite right” looks like. The software engineer whose gut tells them to front-load the testing.

[Figure: Three Dimensions of Expertise. Domain (top): subject knowledge and pattern recognition. Technological (bottom left): effectiveness with AI tools. Situated (bottom right): lived, contextual knowledge. Note: learners aren’t experts, but they must be on track to becoming one.]

In Expert Signals, I explored this third dimension through James C. Scott’s distinction between techne and metis. Techne is formal, codifiable, transferable knowledge: the kind of thing you can write in a textbook or encode in a training dataset. Metis is practical, embodied, context-specific wisdom that resists formalisation.

AI is essentially a very sophisticated form of techne. It pattern-matches across enormous bodies of codified knowledge and produces outputs that follow logical, well-structured forms. It excels at techne. It has no metis.

This means that when I talk about expertise in the resistance framework, I’m not just talking about knowing facts. I’m talking about the full architecture of understanding: the deep structural knowledge that Chi and colleagues identified, the mental representations that Ericsson described, and the situated, embodied wisdom that only develops through sustained practice in a particular context. All three dimensions are at risk when AI does the cognitive work.

Why expertise is the foundation

You cannot evaluate the quality of AI output without domain knowledge. You cannot exercise metacognitive awareness about your own learning if you haven’t done any learning. You cannot stretch your thinking beyond its current bounds if there are no bounds to stretch from. And you cannot engage meaningfully with feedback, human or AI, without the internal quality standards that expertise provides.

Beneficial offloading occurs when a learner offloads extraneous cognitive load to free up working memory for the intrinsic work of learning. Detrimental outsourcing occurs when the learner offloads the intrinsic cognitive work itself. Expertise is what allows us to tell the difference. Without it, every offloading event risks becoming outsourcing, because the learner has no internal reference point to determine whether they’re freeing up capacity for deeper thinking or simply bypassing the thinking altogether.

Patricia Alexander’s Model of Domain Learning traces a trajectory from acclimation (fragmented, surface knowledge) through competence (cohesive, principled understanding) to proficiency (deep, automated expertise). Each stage depends on the previous one. You cannot skip acclimation and expect competence to emerge. And Keith Stanovich’s Matthew Effect – a literacy spin on the proverbial “rich get richer and poor get poorer” – tells us that the cost of missing these early stages compounds over time: knowledge begets knowledge, and its absence creates a widening gap that becomes increasingly difficult to close.

When AI is introduced before sufficient expertise has developed, the risk is not just poor output in the moment: it’s that the expertise never develops at all, because the productive struggle required to build it was bypassed. The student plateaus at the acclimation stage, never building the schemas, the mental representations, or the situated understanding that would allow them to move forward. And they don’t know what they’re missing, because the skills needed to produce correct responses are the same skills needed to recognise correct responses. Less practice leads to less competence, which leads to worse ability to detect errors in AI output, which leads to more reliance on AI, which leads to even less practice.

Expertise as resistance in the classroom

So what does designing for expertise actually look like? I’ve been developing what I’m calling the AI Resistance Training Toolkit: a set of evidence-informed classroom activities designed around the five points of resistance. Each activity is built on established pedagogies and emerging evidence on AI and cognition.

Since the field of “GenAI in education” is relatively novel, much of the work on GenAI-based tutors (as opposed to earlier forms of artificial intelligence) is theoretical, or its evidence base is thin. As such, these activities prioritise effective teaching methods that predate the current wave of chatbots, while referencing studies that suggest an overlap in practice.

The expertise activities share a common design principle: they require students to build and demonstrate domain knowledge before AI tools are introduced. This means appropriate sequencing: ensuring that the cognitive work of building expertise happens first, so that when students do use AI, they bring something to the interaction rather than receiving everything from it.

Here are three draft activities from the toolkit that I think capture the range of what expertise-as-resistance can look like.

While this toolkit is in development, I’m inviting as much feedback and critique as possible, so please get in touch using the comment form at the end of the article!

Flip the Expert

Built on: The teach-the-teacher effect (Tomisu, Ueda, & Yamanaka, 2025; Xing et al., 2025)

Activities where students “teach the teacher” have been a mainstay of education since at least the 1970s. AI, unfortunately, presents itself as an omniscient “meta-teacher” and defaults to answer-giving rather than taking the role of the learner being taught. This activity inverts the typical student-AI dynamic. Instead of asking the AI for help, students teach the AI.

Each student is assigned a concept from the current unit of study. They prepare a structured explanation: identifying key features, posing causal questions (“Why does this work this way?”), and drafting explanations with supporting evidence. Then they “teach” their explanation to an AI chatbot that has been instructed to act as a confused beginner, one that asks clarifying questions and expresses confusion.

The rule is simple: when the AI asks a follow-up question, students must respond from their own knowledge. They cannot ask the AI to answer its own questions. Students record every moment where they couldn’t adequately explain something, creating a gap list that the teacher uses to plan targeted instruction.
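For teachers who want to prototype the “confused beginner” persona themselves, here is a minimal sketch using the OpenAI Python SDK. The persona prompt, model name, and helper function are my own illustrative assumptions, not the toolkit’s actual configuration; any chat-capable model with a similar system prompt should behave comparably.

```python
# A minimal sketch of a "confused beginner" chatbot for Flip the Expert.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The persona wording is illustrative.
from openai import OpenAI

client = OpenAI()

CONFUSED_BEGINNER = (
    "You are a confused beginner being taught by a student. Never explain "
    "the concept yourself, even if asked directly. Ask one short clarifying "
    "question at a time, admit confusion when an explanation is vague, and "
    "ask 'why?' about causal claims. If the student asks you to answer your "
    "own question, remind them that they are the teacher here."
)

def teach_the_ai(history: list[dict]) -> str:
    """Send the running lesson transcript and return the beginner's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do
        messages=[{"role": "system", "content": CONFUSED_BEGINNER}] + history,
    )
    return response.choices[0].message.content

# Example turn: the student offers an explanation; the AI responds as a novice.
history = [{"role": "user", "content": "Photosynthesis turns sunlight into food."}]
print(teach_the_ai(history))
```

The design choice worth noting is that the system prompt enforces the activity’s rule from the AI side as well: if a student tries to make the chatbot answer its own question, the persona deflects rather than complies.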

Evidence: Tomisu, Ueda, and Yamanaka (2025) theorise that positioning AI as an adaptive novice rather than an omniscient authority might shift cognitive demand back to the learner. Xing et al. (2025) demonstrated that GenAI-powered teachable agents, where students teach rather than receive, enhanced learning in middle school mathematics.

Depth Gauge

Built on: Progressive schema building (Rosenshine, 2012), worked-examples with gradual removal of instructional scaffolds (e.g., Powell et al., 2022)

Depth Gauge introduces students to a three-level framework for understanding any topic: Level 1 is surface recognition (vocabulary, basic facts), Level 2 is working knowledge (explaining relationships, applying concepts), and Level 3 is deep understanding (identifying exceptions, comparing perspectives, transferring to new contexts).

Students begin by self-assessing at Level 1, writing down what they currently recognise at a surface level. The teacher validates these self-assessments with a brief formative check and instruction of basic concepts. Over subsequent lessons, students investigate the topic through primary sources, experiments, close reading, or fieldwork, with no AI tools. After each investigation, they re-assess their depth level with evidence.

At Level 2, the teacher introduces a worked example showing expert engagement with the topic, and students compare it to their own developing understanding. When students demonstrate Level 3 understanding through an assessment task completed without AI, they may access AI tools for extension, refinement, or creative application. Finally, students annotate their depth gauge one last time, noting what AI helped them refine versus what they had to build independently.

Evidence: The evidence base here draws on Rosenshine’s principles of presenting new material in small steps with models to reduce cognitive load and build schema incrementally, and on a number of the ideas in Powell et al.’s Myths That Undermine Maths Teaching. In particular, the activity draws on the idea that students need to be able to demonstrate increasingly complex skills without AI – and to reflect on their own application of those skills – before confidently using AI in that domain.

Slow Notice

Built on: Close reading (e.g., Shanahan); structured cognitive offloading (Hong et al., 2025); one minute, three minutes, five minutes, write! (Furze, 2021).

Students select an object, text, dataset, artwork, specimen, or phenomenon connected to the current unit. They spend 10 to 15 minutes observing it closely, whatever that looks like in their domain: close reading, sketching, annotating, photographing from multiple angles, or taking detailed notes. No AI tools, no internet searches. The goal is sustained, unmediated attention.

After observing, students write a paragraph explaining the complexity they noticed: “This is more complex than it first appears because…” They must identify at least three layers, components, or interactions they observed. They generate genuine questions, things they noticed that they cannot yet explain.

Then the teacher provides an AI-generated description or analysis of the same subject. Students compare it against their own observations, marking what the AI captured, what it missed, and what they noticed that the AI could not: sensory details, contextual nuances, situated knowledge. Finally, students write a brief reflection: “What do I understand about this subject that the AI does not, and why?”
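Generating that comparison text can be a single call. Here is a sketch under the same assumptions as the Flip the Expert example above; the subject string and prompt wording are hypothetical, and the model only ever sees the textual label, never the object itself, which is partly the point of the exercise.

```python
# Produce an AI description of the observed subject for students to mark up.
# Assumes the OpenAI Python SDK, as in the earlier sketch; the subject string
# and prompt are illustrative. The model works only from this short label,
# not from observation, which is exactly the gap students will probe.
from openai import OpenAI

client = OpenAI()

subject = "a watercolour landscape painting"  # hypothetical example subject
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Describe and analyse {subject} in two short paragraphs.",
    }],
)
print(response.choices[0].message.content)  # hand this text to students
```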

Evidence: Shanahan and others provide excellent resources on close reading in literature and English, but these skills extend beyond these disciplines. Furze (2021, 2022) identified that providing explicit structure around close reading exercises removes some of the ambiguity of annotation and analysis tasks. Hong et al. (2025) demonstrated that when students retain higher-order cognitive work like observation, critique, and reflection while offloading lower-order tasks, AI can enhance rather than diminish critical thinking.

Designing for expertise, not against AI

Again, these are works in progress. Like everything in education, there is no easy solution to AI and expertise. If you have contrary research, want to critique or question these activities, or just have suggestions, then please get in touch via the form at the end of this article.

Each of these activities ends with students using AI, or at least engaging with AI-generated output. The point, which both AI and non-AI research is consistent on, is the sequencing: when to introduce new concepts, new material, new challenges, and new technologies.

Sequencing also offers some nuance to the endless back-and-forth debate between explicit instruction and student-led learning. In my experience this dichotomy has done more harm than good, often degenerating into name-calling (“explicit instruction is a cult”, “student-led learning is negligent”) and unhelpful public narratives. The Resistance Framework focuses on cognitive offload, which tends to be a topic that attracts this kind of divisive discourse. You will see evidence from both sides of the conversation throughout this series, since both approaches have a place in education.

For example, Schwartz and Martin’s 2004 work on “inventing to prepare for future learning” showed that students who struggled with problems before receiving instruction demonstrated dramatically better transfer to new contexts. Schwartz and Bransford’s earlier “time for telling” research established that direct instruction is most effective after struggle: there is an optimal moment for telling, and AI tells immediately, before the learner is ready to benefit.

These activities help create that optimal moment. They build the expertise that makes AI use productive rather than harmful. They ensure that when students interact with AI, they bring internal reference points: their own knowledge maps, their own depth of understanding, their own first-hand observations. The AI becomes a technology for going further, not a substitute for getting started.

This is what I mean by expertise as resistance: not resistance against AI, but resistance against the erosion of the cognitive work that builds understanding. Our job as educators is to protect that mechanism, deliberately and by design, so that when students do encounter AI, they have something worth extending.

In the next post in this series, I’ll turn to the second point of resistance: evaluation. If expertise is the foundation, evaluation is one of the most important things you do with it.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
