Resistance as a Framework for Combating Cognitive Offload

I hit a point in my late 30s where as a father of three with a relatively sedentary job, I could no longer perform simple tasks like standing, getting out of bed, or bending over to pick something up without making this noise…

The dad grunt will be familiar to many people at my age and stage. But I refuse to be laid low by the ever-present risk of a debilitating lower back injury. That’s why I started going to the gym, and consciously working on things that would reduce the risk of me ending up on the floor.

Resistance training — strength training with weights — is now, like it or not, a fixture in my life. And in much the same way, I think we need to create deliberate moments of resistance to AI, lest our brains end up on the floor, unable to move and wondering what we did to deserve this.

Brain Gym

Cognitive atrophy, brain rot, offloading. There are plenty of terms being kicked around for what AI might do to our brains. There’s a growing body of research, some of which I’ll discuss in this article, that suggests that AI contributes to the degradation of learning, a loss of memory, or the total inability to retain knowledge.

I’m far from convinced that using AI inevitably or universally leads to cognitive decline. We’ve had these conversations in education and technology before. From screen time to video content, gaming to ed tech. And the conclusions are often the same: it’s not what the technology does to you, but how you use the technology that ultimately determines how you think and learn with it.

But I can say, based on my own personal experience and research from K–12, higher education and industry, that generative artificial intelligence in the form of “helpful assistants” like Copilot, ChatGPT, Gemini, and Claude can absolutely increase the kind of cognitive laziness which academics and educators are worried about.

I’m a practical person, and while I find the theory interesting, I’d rather start getting to grips as soon as possible with what resisting cognitive offload looks like in the classroom. In this article, I’m going to present a framework based around the concept of resistance, drawing on recent post-ChatGPT research, as well as long-standing discussions of cognition and metacognition. I’ll also be testing this framework at the coalface through a number of schools that I’m working with in 2026 and 2027.

I had the draft of this post sitting in WordPress when I saw an article shared by Simon Buckingham Shum. Vendrell and Johnston’s ‘Scaffolding critical thinking with generative AI’ is a deeply researched, peer-reviewed framework for integrating LLMs in higher education in ways that preserve higher-order thinking. It overlaps with, and in several places thoroughly explains, some of the areas in my framework. Their work identifies six processes underpinning critical engagement and translates them into eight design principles, including preserving cognitive friction, a concept closely aligned with what I’m calling resistance. It’s well worth a read.

Why Resistance?

I’ve written about resistance in AI a few times in the past, though mostly in a slightly different context — the context of resistance as a political act, resistance to big tech, resistance to the economic drivers of AI pushing the technology increasingly into every corner of education.

What I’m talking about here is a slightly different kind of resistance. It’s more like the resistance you’ll find in the gym. Resistance which is designed to put friction in place, or to counteract our body’s natural tendencies to dissolve over time. It’s resistance designed to build strength, not resistance designed to enact political change — although maybe the two things are connected.

I’m offering resistance as a metaphor because something needs to stand in the way of AI doing all the work. Studies of AI and cognition dominate academic discourse but also grab public attention, like the widely shared MIT article suggesting that ChatGPT turns your brain into soup. A more conscientious reading of that article reveals the truth of the findings: the somewhat obvious statement that if you use AI to entirely complete a task from end to end, you leave with very little recollection of the task or the work created.

That statement is basically common sense. If I had to complete a piece of mathematics homework and I asked the person sitting next to me in class to do it, of course I wouldn’t remember the process of crunching those numbers. If I hired someone via a contract cheating website to write my essay for me and my only input was handing over my credit card details, you wouldn’t expect me to remember what I hadn’t written.

The takeaway from articles like this should not be “offloading tasks entirely onto AI leads to cognitive decline.” We should instead be discussing what resistance we need to put in place to ensure that some learning does happen along the way.

Cognitive Offloading, Atrophy and Metacognitive Laziness

Rather than trying to cram the last few years of research on cognitive offload and AI into this blog post, I’m going to point you towards a recent publication from Jason Lodge and Leslie Loble through University of Technology Sydney, which does a far better job.

Lodge and Loble’s report features some of the most current research on topics like GenAI and cognitive offloading, such as the Bastani et al. paper from 2025, which looked at over 1,000 students and asked thoughtful questions about what happens when we offload too much of our responsibility for thought onto a chatbot.

Loble and Lodge argue that there is a distinction, though, between uses of AI which result in beneficial cognitive offloading and those which result in what they call “outsourcing”. The difference is defined as follows:

Beneficial offloading occurs when a learner offloads extraneous load to free up limited working memory resources for the intrinsic work of learning… Detrimental cognitive offloading, or outsourcing, occurs when a learner offloads the intrinsic cognitive work itself.

So, we are not looking to treat the use of GenAI as an inherently negative form of cognitive offload, but we do need to recognise that outsourcing the “intrinsic cognitive work” will get in the way of learning.

[Diagram: ‘A Framework for Resistance’, outlining five key areas for learning with AI: Metacognition, Expertise, Feedback, Evaluation, and Stretch, with a note on the nature of resistance in learning.]
This framework and the diagrams in this article are released by Leon Furze under a CC BY 4.0 license.

A Framework for Resistance

I’ll break the concept of resistance into five key areas, where students can self-regulate their use of GenAI, or educators can put in place deliberate friction to encourage learning to happen. I say encourage and not make, because honestly any student who really wants to can bypass all of these points of friction and use GenAI — or other old-fashioned methods like contract cheating — to create the illusion of learning.

This is classic circle of influence and circle of control territory. We can’t control what every single student is going to do with the approaches that we offer them. Students sometimes aren’t even fully in control of the extenuating factors that lead them to cheat or misuse GenAI. The best we can do is consciously create situations where most students, most of the time, are able to learn.

[Diagram: ‘Circle of Influence & Circle of Control’. The inner Circle of Control covers designing learning situations; the outer Circle of Influence covers the factors shaping student choices, including individual motivation and external pressures, with a note that we can only create the conditions for learning.]

Expertise

There is a paradox at the heart of AI use in education: in order to use it well, you need expertise in the area that you are using it. And by definition, learners don’t have that expertise yet.

I first wrote about this problem in 2024, in an article called Expertise Not Included. The argument was straightforward: because large language models are so broad and so general, you really need to know what you want before you can make the most of them. You need to know enough about your subject to spot when the AI is hallucinating or producing plausible nonsense. And as the complexity of the topic increases, it becomes harder and harder to spot those inaccuracies, meaning more expertise is required, not less.

Jeppe Stricker, writing from a higher education perspective, has described this as The Exhaustion Problem. He argues that GenAI produces an unpredictable mixture of polished rubbish and coherent information, forcing users into a relentless sorting process that requires precisely the expertise they are trying to develop. What we’re witnessing, Stricker argues, is a fundamental transfer of epistemological responsibility from institutions and teachers onto individual learners who are least equipped to handle it.

Expertise isn’t just about producing correct outputs. It’s about developing the internal representations: mental models, pattern recognition, and the intuitive judgements that allow an expert to navigate novel situations. A medical student who uses AI to generate differential diagnoses without learning to reason through symptoms hasn’t become a better diagnostician; they’ve become someone who knows how to use an app. A law student who generates legal arguments without learning to read cases critically hasn’t developed legal reasoning. They’ve developed prompting skills.

The traditional educational model, for all its flaws, embedded quality assurance into the knowledge transmission process. Textbooks were peer reviewed, lectures were delivered by credentialed experts, and library collections were curated by professionals. Students could reasonably trust that the materials placed before them met some baseline standard of reliability. AI blows that up. The mental energy that should be powering intellectual development gets consumed by verification work that yields no educational benefit even when done successfully.

Finally, expertise isn’t just one thing. I have proposed a three-dimensional model: domain expertise (what you know about your subject), technological expertise (how you work with AI), and situated expertise (the contextual, relational knowledge gained through years of practice in a specific setting). That third dimension is often overlooked. A person can be highly skilled in their discipline and highly skilled in using AI, but if they are unable to contextualise and share that knowledge, it becomes a limited kind of understanding. Situated expertise is the lived, embodied knowledge that comes from years on the job: reading a room, understanding when a student is struggling before they say so, knowing which approach to take not because a textbook says so but because you’ve been here before.

[Diagram: the three dimensions of expertise needed for effective AI use: Domain, Technological, and Situated.]

The first facet of resistance, then, is that disciplinary, technological, and situated expertise are fundamental to avoiding harmful cognitive offload. Learners are not experts, but they are, we would hope, on track to gaining expertise. The role of education is to protect and nurture that trajectory. When AI is introduced before sufficient expertise has developed, the risk is not just poor output. It’s that the expertise never develops at all, because the productive struggle required to build it was bypassed.

Evaluation

If expertise is the foundation, evaluation is one of the most important things you do with it.

Bearman, Tai, Dawson, Boud, and Ajjawi made the case in a 2024 paper: AI has widened the gap between our capability to produce work and our capability to evaluate the quality of that work. The concept they centre on is “evaluative judgement”, defined as the capability to judge the quality of work of self and others. It has two components:

Firstly, a person must hold an internal understanding of what constitutes quality, and secondly they must make a judgement about work – whether it be theirs or someone else’s.

Developing evaluative judgement for a time of generative artificial intelligence

Expert evaluative judgement is often tacit and holistic. A novice evaluates analytically, working through criteria one by one. An expert recognises quality as a kind of gestalt, difficult to articulate but unmistakable when present.

GenAI outputs are designed, through various methods including RLHF, to look good. They are fluent, confident, well-structured, and often sycophantically pleasing. Lodge and Loble describe this as “fluency on demand” and identify it as a core driver of the “illusion of competence”: learners mistake the ease of processing for the depth of learning. Evaluative judgement is precisely the capacity to see past that surface. To ask whether the argument holds up, whether the evidence is real, whether the response is too generic or drawing on outdated assumptions.

[Diagram: evaluating AI output, contrasting the characteristics of fluent, confident writing with evaluative questions about argument validity, evidence authenticity, and content relevance.]

Bearman and colleagues propose three intersections between evaluative judgement and AI that are useful for thinking about resistance. The first is developing judgement of AI outputs: can the student tell whether what the AI produced is any good? The second is developing judgement of AI processes: does the student know whether their approach to working with AI was effective? The third is using AI itself to develop evaluative judgement: asking the AI not just “is my work good?” but “have I accurately appraised the quality of my work?” This creates a recursive loop where the student is constantly calibrating their own judgement against multiple sources.
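
As a rough sketch of what that third intersection could look like in practice, here is a minimal Python illustration. Everything in it is hypothetical: `ask_llm` is a stand-in for whatever chat access a school or university provides, not a real API, and the prompts are my own wording rather than anything from Bearman and colleagues. The point is the shape of the loop: the student appraises their own work first, the model is asked to comment on the appraisal rather than replace it, and the student then revises their judgement.

```python
# Hypothetical sketch of a recursive evaluative-judgement loop.
# `ask_llm` is a placeholder, not a real API; swap in your own model access.

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a chat model."""
    return "[model response would appear here]"


def evaluative_loop(student_work: str, criteria: str) -> dict:
    # Step 1: the student appraises their own work *before* seeing any AI feedback.
    self_appraisal = input(
        "Against the criteria, what are the strengths and weaknesses of your work?\n> "
    )

    # Step 2: the model is asked to critique the appraisal, not to grade the work.
    calibration = ask_llm(
        f"Assessment criteria:\n{criteria}\n\n"
        f"Student work:\n{student_work}\n\n"
        f"Student's self-appraisal:\n{self_appraisal}\n\n"
        "Do not rewrite or grade the work. Comment only on whether the self-appraisal "
        "is accurate: what has the student over- or under-estimated, and why?"
    )
    print(calibration)

    # Step 3: the student records how their judgement has shifted.
    revised = input("Having read that, what would you change about your appraisal?\n> ")

    return {
        "self_appraisal": self_appraisal,
        "calibration": calibration,
        "revised_appraisal": revised,
    }
```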

That recursive quality is what makes evaluation a powerful site for resistance. Even if a student generates work entirely with AI, the process of evaluation creates an additional point of contact between the student and the work. I’ve experienced this directly. I have generated code in languages I am wholly unfamiliar with, end-to-end, using tools like Claude Code. Upon reviewing that code I’ve learned more in a shorter time than had I tried to start from scratch. The evaluation was the learning. Not the generation.

But evaluation, like expertise, needs to be designed for, not left as an afterthought. Bearman and colleagues are explicit: the process of developing evaluative judgement needs to be deliberate and be deliberated upon. And there is an important connection back to expertise. Evaluative judgement is always contextualised. You cannot evaluate a philosophy essay without understanding what good philosophical argument looks like. You cannot evaluate AI-generated code without understanding what the code is supposed to do. The less domain knowledge and situated expertise a student has, the less capable their evaluation will be, and the more likely they are to accept plausible but flawed output uncritically.

Evaluation depends on expertise, but it also builds it. Every act of genuine evaluation forces the student to engage with the substance of the work. It interrupts the passive acceptance that Lodge and Loble identify as the default path of unstructured AI use. It creates resistance.

Metacognition

Metacognition is the capacity to think about your own thinking. For a student who devolves too much of the responsibility for learning to AI, metacognition cannot happen. You can’t think about what you’re thinking if you’re not doing the thinking.

Fan and colleagues coined a term for this in 2024: “metacognitive laziness.” In a randomised study comparing learners using AI, human experts, and other tools, they found that the convenience of AI undermines engagement in the self-regulated learning processes that metacognition depends on: planning, monitoring, and revision. The learner effectively hands over their metacognitive responsibilities to the tool.

Engaging in self-regulated learning creates a cognitive load in itself: it takes effort to monitor your own thinking. Students, driven by a rational desire for efficiency, choose to bypass that immediate cost by offloading to AI, and in doing so miss out on the long-term benefit of developing as self-directed learners. Lodge and Loble’s report frames this as a vicious cycle: the fluency of AI creates an illusion of competence, the illusion triggers metacognitive laziness, the laziness leads to more outsourcing, and the outsourcing erodes the student’s actual knowledge, making them more dependent on the tool and less able to judge its output in the future.

But what does metacognition look like if the learner is sharing the responsibility of thought with an algorithm? Coming back to the overarching metaphor of resistance, the gym might prove useful here.

You can have a meticulously crafted, personal-trainer-endorsed program which demonstrates exactly how to get from point A (lying on the floor clutching your lower back in agony) to point B (world’s strongest dad™). You can follow that plan to the letter: turning up, ticking boxes, and gobbling down protein according to your improved diet plan. But at some point the wheels will come off. And often this is not because of poor training program design. It’s because of not listening to your body.

In the same way, metacognition with GenAI requires listening to your mind. We know what learning feels like. It’s hard. Our brains hate it. Louise David, Eliana Vassena and Erik Bijleveld wrote an interesting meta-analysis of “the unpleasantness of thinking”: humans generally find the mental effort associated with learning and thinking to be a negative experience. That’s why GenAI is so seductive and so successful. We love shortcuts. And the technology now exists which provides an excellent proxy for learning.

[Infographic: ‘Spotting the Signs’, covering the misleading cue (the feeling of understanding), warning signs such as Zoom-like fatigue and low self-esteem, and the response: stepping away from the screen and reflecting.]

Much like we could blindly follow a training plan and run our bodies into the ground, we can blindly allow AI to plot out the structure of our essays, our projects, our reports. Eventually we will hit points where our mind can’t take it anymore, and we will receive signals. Using AI extensively is tiring: it feels something like Zoom fatigue, something like having too many tabs open in your brain at once. Using AI can also be boring, with the attendant feelings of listlessness and even low self-esteem. These are metacognitive warnings.

Metacognition is not just thinking about thinking: planning strategies, setting goals, reflecting on your learning process. It’s also about registering the signals that your brain is not quite working the way it’s supposed to. The feeling of competence that AI induces, what Lodge and Loble call the illusion of competence, acts as a misleading cue. It tells the learner that deep engagement is no longer necessary. Metacognition is the capacity to override that cue, to recognise that ease is not the same as understanding.

Metacognition as resistance is about knowing when to stop and when to pile on weights. Knowing when to dial down or ramp up the use of technology. Knowing when to step away from the computer and pick up a pen. The research on metacognition and AI suggests this can be taught: Xu and colleagues found that integrating “metacognitive supports” into AI environments enhanced self-regulated learning, and Singh and colleagues found that prompts designed to make users pause, reflect, and assess their understanding led to deeper inquiry. The key finding across these studies is that the support structures had to be integrated and non-optional. Simply suggesting that students reflect wasn’t enough. The metacognitive pause had to be built into the process, because left to their own devices, learners will take the path of least resistance.
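
To make the “integrated and non-optional” point concrete, here is another small sketch along the same lines. Again, `ask_llm` is a hypothetical placeholder rather than any real product, and the word-count threshold is an arbitrary assumption; the design choice it illustrates is simply that the pause happens before the model responds, and cannot be skipped.

```python
# Hypothetical sketch of a metacognitive pause built into the tool itself,
# rather than offered as an optional suggestion.

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a chat model; swap in real model access here."""
    return "[model response would appear here]"


def gated_question(question: str) -> str:
    # The pause is non-optional: no prediction, no answer.
    prediction = ""
    while len(prediction.split()) < 15:  # arbitrary threshold, for illustration only
        prediction = input(
            "Before you ask, predict what the answer will be and why (15+ words):\n> "
        )

    answer = ask_llm(question)
    print(answer)

    # After the answer, the learner is prompted to compare, not just to read on.
    input(
        "Where did the response differ from your prediction? "
        "Note one thing you still need to check:\n> "
    )
    return answer
```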

Stretch

The first three areas of resistance are, in a sense, defensive. They’re about maintaining the integrity of learning when AI threatens to do the work for you. Cognitive stretch is different. It’s the point where resistance becomes productive, where AI genuinely takes you further than you could go alone.

The concept draws on a long-standing idea in philosophy of mind: that cognition doesn’t stop at the skull. Andy Clark and David Chalmers argued in 1998 that the tools we use to think can become functional parts of our cognitive systems. Digital notebooks, calculators, and GPS maps aren’t just aids: they change what we’re capable of thinking about. AI is the most powerful version of this we’ve ever had access to.

Clark updated this argument in 2025, noting that what emerges from good human-AI collaboration is not simple offloading but new hybrid systems where each part adapts to what the rest offers. He points to the example of Go players whose own creativity measurably increased after encountering AI strategies, not by copying the machine but by using its alien perspective to see past their own blind spots. That’s what I’m calling a cognitive stretch.

But there is always a catch: you can only stretch from somewhere. A stretch requires a foundation. If you don’t have the disciplinary knowledge, the vocabulary, the conceptual frameworks to direct AI into new territory, you plateau. You end up asking generic questions and getting generic answers. The more you know, the more you can stretch, hence expertise as a key site of resistance.

I’ve experienced this directly. When I use AI to work with code, I can stretch far beyond what I could write from scratch, because I have just enough understanding of what the code is supposed to do, what good architecture looks like, and where the likely failure points are. Someone with no programming knowledge at all could generate the same code, but they wouldn’t know what to do when it breaks, and they wouldn’t be able to direct it toward anything that isn’t already a well-trodden path.

[Diagram: ‘Cognitive Stretch’, showing a Foundation of expertise, evaluation, and metacognition supporting a Stretch that extends cognition beyond individual capabilities.]

Coming back to the gym: cognitive stretch is the equivalent of adding weight to the bar once you’ve built the foundational strength to handle it. It’s not something you do on day one. It’s something you earn through the development of expertise, evaluation, and metacognitive awareness. Without those foundations, what looks like stretch is actually outsourcing. With them, AI becomes a genuine tool for going further.

Feedback

A final point of resistance is feedback: whether that means students using AI to get feedback on their own work, or educators facilitating feedback through AI tools.

Corbin, Tai, and Flenady argued in a 2025 paper that effective feedback between humans is “recognitive”: grounded in mutual recognition of shared vulnerability and agency between teacher and student. Both parties are vulnerable to the judgement of the other. The student explicitly, because it is their work being evaluated. The teacher implicitly, because their capacity to exercise good judgement is under scrutiny. This mutual vulnerability is, according to the authors, the condition of trusting relationships, and trusting relationships are what make feedback work.

GenAI feedback, the authors argue, is extra-recognitive. It may provide accurate, personalised information, but it operates outside this relational framework. A chatbot might congratulate a student on their work, but that congratulation lacks the recognitive significance of another human being who genuinely shares the experience of working hard and expanding one’s knowledge. The authors propose that extra-recognitive feedback can function as a pedagogical sandbox: a low-stakes environment where students build confidence before engaging in genuinely recognitive feedback with peers and teachers.

But even within that sandbox, not all students engage with AI feedback equally. Zhan, Boud, Dawson, and Yan (2025) propose an ecological framework for understanding feedback engagement with GenAI, and their key insight is that GenAI’s affordances only translate into genuine learning when they align with a student’s feedback literacy. Their contrasting examples are instructive. A student with low feedback literacy submits work to ChatGPT with a vague prompt, receives generic information, and either passively accepts it or disengages. A student with high feedback literacy writes specific prompts against assessment criteria, critically judges the quality of the response, cross-references with other sources, monitors their own revisions, and maintains awareness of academic integrity. The authors argue that feedback literacy in a GenAI context requires new competencies: knowing how to write effective prompts, exercising evaluative judgement on AI outputs, self-regulating the revision process, and understanding when AI use is appropriate and when it is not.
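
The contrast Zhan and colleagues draw can be made tangible with a pair of example prompts. These are my own illustrations rather than examples from their paper, and the criteria are placeholder rubric statements, but they show the difference between a vague request and one written against the assessment criteria.

```python
# Illustrative only: two ways of asking for feedback on the same draft.
# The criteria below are placeholders; substitute your own rubric.

draft = "..."  # the student's own work, pasted or loaded from a file

# Low feedback literacy: a vague prompt invites generic, easy-to-accept feedback.
vague_prompt = f"Can you give me feedback on my essay?\n\n{draft}"

# Higher feedback literacy: the prompt is written against the assessment criteria,
# asks for evidence, and keeps the revision decisions with the student.
criteria = [
    "Constructs a coherent argument supported by evidence",
    "Engages critically with at least three sources",
    "Uses discipline-appropriate structure and referencing",
]
criteria_block = "\n".join(f"- {c}" for c in criteria)

literate_prompt = (
    f"Here are the assessment criteria:\n{criteria_block}\n\n"
    f"Here is my draft:\n{draft}\n\n"
    "For each criterion, quote one passage where I meet it and one where I fall short, "
    "and explain why. Do not rewrite any of my sentences. I will decide what to revise "
    "and will check your claims against my sources."
)
```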

[Diagram: AI feedback and human feedback compared, with their respective characteristics and considerations.]

I think the extra-recognitive framework will be tested further as these technologies develop. We will increasingly see students forming something like a recognitive relationship with GenAI. If you think this is in the realms of science fiction, spend some time on the Reddit threads dedicated to people discussing their relationships with chatbots, including Character.AI and Replika, but also ChatGPT. Whether we like it or not, people are simulating relationships with AI in increasing volume. The mutual trust and respect between a human educator and a student could probably be convincingly approximated, even if Corbin and colleagues are right that it cannot be genuinely replicated.

Either way, feedback remains an important site for resistance. Students need to understand that GenAI feedback is capable and increasingly accurate, but also sycophantic and prone to hallucination. Some of these issues will likely be solved, so we cannot build our pedagogy around the assumption that AI feedback will always be flawed. The real resistance comes from students’ feedback-seeking behaviours as a whole. Do they know when to approach a GenAI model versus when to approach a teacher? Do they value the expertise of the teacher, and if so, how does that differ from the expertise offered by the chatbot? Corbin and colleagues suggest that the optimal integration of GenAI feedback may actually enhance human feedback relationships, by offloading extra-recognitive tasks to AI and freeing educators to invest in the recognitive relationships that support deep learning. That’s an optimistic vision, but it depends on educators making conscious choices about what feedback is for, not just how much of it can be produced.


Conclusion

These five areas of resistance are not a checklist: they overlap, depend on each other, and will look different in every classroom and every discipline. Expertise underpins evaluation. Evaluation requires metacognition. Cognitive stretch depends on all three. Feedback runs through everything. The framework is deliberately interconnected because learning is interconnected, and any attempt to isolate one area from the others will miss the point.

Coming back to the gym one last time: nobody walks in on day one and loads up the bar with their target weight. You build foundations. You learn form. You listen to your body. You add weight gradually, and only when you’ve earned the right to carry it. Resistance training works because the resistance is calibrated to where you are, not where you want to be: you physically can’t argue with the logic. The same is true here. A Year 7 student and a postgraduate researcher need different kinds of resistance, applied at different points, with different levels of AI involvement. The framework is not prescriptive about what that looks like. It’s prescriptive about the principle: that resistance has to be there, that it has to be deliberate, and that it has to be designed for rather than hoped for.

What this framework does not do is tell you AI is “bad”. It tells you that AI without resistance is bad for learning. Lodge and Loble’s beneficial offloading, Clark’s cognitive extension, Bearman and colleagues’ recursive evaluative loops: these are all descriptions of AI use that makes learners better, not worse. But they all require something from the learner. They all require effort, knowledge, judgement, or self-awareness. They all require resistance.

Over the next few months, I’ll be testing these ideas with the people they really impact: learners. Through a number of schools I’m working with in 2026 and 2027, we’ll be exploring what resistance looks like in practice, what works, what doesn’t, and what students themselves think about the balance between AI assistance and genuine learning. Stay tuned.
