Gradually Reclaiming Responsibility

Recently I wrote about resistance as a framework for avoiding cognitive offload when working with GenAI. My argument was that resistance comes in many forms: from the ability to self-regulate and evaluate the output of GenAI, through to metacognitive “thinking about thinking” and recognising the signals from your mind and body that you have become over-reliant on the technology.

But what does resistance look like in practice? And how does the concept fit with our existing models of learning? In this article I am going to focus on one well-known and widely used model: the gradual release of responsibility.

I Do, We Do, You Do

In Pearson and Gallagher’s (1983) gradual release of responsibility (GRR) model, students first observe an educator working through a problem or demonstrating a method, before working through it alongside the educator, then with a peer, and then alone. The model is sometimes called “I do, we do, you do it together, you do it alone.” The most commonly understood version in K–12 comes from Fisher and Frey (2013) and looks like this:

[Diagram: the relationship between teacher responsibility and student responsibility in learning. The triangular structure moves through four types of instruction – Focused Instruction (“I do it.”), Guided Instruction (“We do it.”), Collaborative Learning (“You do it together.”) and Independent Learning (“You do it alone.”) – with responsibility gradually shifting from teacher to student.]
Adapted from Fisher & Frey (2013) – Better Learning Through Structured Teaching, 3rd ed.

It is a model of teaching and learning that reflects the logic of many classrooms: from mathematics instruction, where students learn new formulae and techniques, to the modelling of paragraphs and sentence structures in English, and the conducting of practical work in subjects from science through to physical education and design technologies. It has been adapted for age and stage and underpins aspects of many curricula around the world, including here in Australia.

But GenAI complicates the gradual release of responsibility model. As students gain more and more access to these technologies, even from a young age, there is the very real risk that the responsibility is offloaded – released not onto the students, but onto the chatbot. Resistance offers a way to combat this.

Reclaiming Responsibility

The more a student uses GenAI, the more effort they have to put in to maintain responsibility for their work. If a student uses AI only to plan or brainstorm – to kick ideas around with a chatbot or conduct early research – then they still retain a lot of responsibility for the final task. But what about when a student is permitted, or even encouraged, to use AI extensively throughout the learning process?

As the teacher relinquishes responsibility, moving from modelling and demonstration through to supervision and ultimately the students’ independent learning, how do we know whether learning is actually happening, or whether the responsibility has shifted entirely onto the machine?

In short, I don’t think we can really know that in every situation. I don’t think we’ve ever been able to entirely verify learning, certainly not through proxies like written essays and examinations. But if we think of the GRR alongside the concept of resistance, then we can start to have conversations with students that encourage them to push back against the temptation to release responsibility onto AI.

The diagram below blends the gradual release of responsibility model with the concept of resistance, mapped against increasing AI use:

[Diagram: the relationship between AI use and resistance, as a triangular graph. The left side shows five levels of AI use, from “I do it alone” to “I explore with AI”; the right side shows five corresponding levels of resistance, from “None needed” to “Maximum resistance”. The more a student uses AI, the greater the effort required to maintain critical thinking.]

At the top, when a student is working without AI, there is no need for resistance. The GRR model operates as it always has: the teacher models, guides, collaborates, and ultimately the student works alone. Responsibility transfers from teacher to student, and the student’s independent work is, at least in theory, evidence that learning has happened.

But as AI use increases, so too does the risk that responsibility is released not onto the student, but onto the technology. This is where resistance becomes essential. The more a student relies on AI throughout the learning process, the more deliberate effort they need to put in to maintain ownership of the thinking.

At lower levels of AI use, where a student might bounce ideas around with a chatbot or use it for early planning and research, the resistance required is relatively modest. There should be some critical evaluation of the quality of the ideas, some testing of perspectives, some checking for bias. The student still retains a great deal of responsibility for the final product. But even here, the five dimensions of resistance from the earlier framework apply: the student needs enough subject expertise to judge whether the AI’s suggestions are any good, enough evaluative judgement to distinguish between a useful idea and a plausible one, and enough metacognitive awareness to notice when they have stopped thinking and started accepting.

As AI use deepens into collaboration, where the technology is involved in drafting, reviewing, editing, and providing feedback, the resistance required increases. Feedback is a core dimension of resistance and should be a site of friction, not passive acceptance of the AI’s advice. The sycophantic tendencies of GenAI platforms, the assumptions baked into the algorithm, the lack of human nuance and recognition: all of these should be points of pushback. Evaluation becomes critical here, because the gap between the student’s ability to produce work and their ability to judge its quality is at its widest when AI is doing a substantial share of the production.

[Diagram: evaluating AI output – fluent, confident, well-structured responses set against evaluative judgements that question the validity and depth of the arguments.]
Diagram from Resistance as a Framework

At the highest levels of AI use, where a student might be using GenAI to produce an entire artefact, the resistance required is at its maximum, and it needs to come in a variety of forms and places. Consider a student in a computer science class using AI to generate an entire application: resistance comes from applying expertise throughout the process, checking the legitimacy of code, making architectural decisions that the AI cannot contextualise. In English or media, a student could feasibly use multi-modal AI to create an entire piece of media: a persuasive advertisement, a short film, a narrative animation. In that case, the resistance comes from evaluation, from the application of disciplinary metalanguage to guide and refine the output, from metacognitive awareness of what has been learned along the way, and from the cognitive stretch of pushing the AI into territory that a generic prompt would never reach.

The core insight of the diagram is simple: in the original GRR, responsibility flows from teacher to student. In an AI-augmented version, there is a third party in the room, and it is perfectly happy to take on all of the responsibility without ever intending to give it back.

Resistance is how the student reclaims it.

How Do We Know If Students Are Learning?

“How do I know my students are learning?” is now one of the most commonly asked questions when I speak to educators in both K–12 and higher education. I suppose on one level an appropriate answer might be: how have you ever known that they are learning? For a brief period in history, we’ve used artefacts like essays, reports and examinations as evidence of learning. But in many ways, they were always unsatisfactory.

In the GRR model, when students reach the “I do it alone” stage, the implication is that we can see students have learned through their ability to demonstrate in practice what they have seen demonstrated in theory. The same could be true with artificial intelligence. To use an example from earlier: if the point of the English or media exercise – creating an animated narrative – is to demonstrate the student’s understanding of narrative structure, characterisation and plot, then the technology they use to create the product is less important than their ability to articulate the process. While the production itself can be handed off to AI – responsibility for creating the artefact handed over to a multi-modal GenAI application – the ability to articulate the process, justify the design choices and evaluate the outcome cannot.

Maybe then, resistance is what we are assessing. Maybe resistance is learning. And in the face of technologies which make it easier than ever to simulate the responsibility of learning, we need to work together with students to make it possible for them to really show us what they know and what they can do.
