Resist, Refuse, or Rationalise – Just don’t Roll Over

Towards the end of 2024 there was a noticeable uptick in AI critique from educators and academics. It seemed as though the initial shock of ChatGPT had finally worn off, and people were beginning to grapple with the bigger-picture concerns of GenAI beyond “it’s that thing that students are using to cheat”. Of course, there is still a lot of the cheating narrative out there – just search online for “AI detection tools” (even though they don’t actually work). But in November and December last year, the GenAI resistance started to really take shape.

GenAI is a divisive technology, and since the release of OpenAI’s notorious chatbot we’ve seen the full spectrum of “I hate this and I will never use it” to “this is amazing and I will use it for everything”. I don’t know if it’s the algorithm feeding my preferences, but the volume of anti-AI sentiment in education definitely seems to have grown, and with good reason. Over 2024, we saw numerous reports of AI’s impact on the environment, the human labour costs of training and classification, and the underhanded and often practically (if not actually) illegal practices of developers.

We have also seen how the major players – Google, OpenAI, Microsoft, Meta, Amazon, Apple – have deliberately targeted both businesses and schools with technology that can, at best, be described as undercooked. You see, GenAI doesn’t actually work that well for many of the purposes it has been proposed for, and no matter what the tech companies and the LinkedIn AI acolytes tell us, it has not revolutionised education.

But the growing discord in education circles has also led to some bold and sometimes incorrect claims on the side of the resisters. Claims that GenAI is an evil to be avoided at all costs, erroneous accounts of the water and energy consumption of ChatGPT prompts, and moral panic over the death of reading, education, and perhaps even the entire planet have proliferated across social media.

Over the Christmas holidays, I found myself in an awkward position: trying to figure out ways to help educators understand and perhaps use AI, whilst being increasingly sucked into the black hole of critique and negativity that was filling my feed. I was planning 2025’s professional learning and the upcoming work with my school and university clients, while at the same time wondering if I could really justify using AI at all. In the terms of my late-2024 posts, I was slipping off the fence.

Then, I took a break from AI to re-focus on my PhD studies. I worked a little more on the thesis, re-read some articles, and did some “real writing”. Of all things, Foucault pulled me out of my existential GenAI crisis.

This article is a response to the ongoing tensions between resistance and rationalisation: between the educators banishing GenAI from their classrooms, and the ones trying to find ways to coexist with the technology, or even adopt it fully. It is also unapologetically filled with block quotes from everybody’s favourite French philosopher.

A Plurality of Resistances

“Where there is power, there is resistance, and yet, or rather consequently, this resistance is never in a position of exteriority in relation to power… Hence there is no single locus of great Refusal… Instead there is a plurality of resistances, each of them a special case: resistances that are possible, necessary, improbable; others that are spontaneous, savage, solitary, concerted, rampant, or violent; still others that are quick to compromise, interested, or sacrificial; by definition, they can only exist in the strategic field of power relations.”

Foucault (2008). The History of Sexuality: Vol 1. p. 94

First of all, not all rejection of AI is created equal. Some educators have been experimenting with language models, image generation, and code completion tools since before ChatGPT was even released, and have taken an increasingly ethical stance as the technology has grown in energy consumption and corporate power. Others have never used ChatGPT on the premise that it’s “not for them”. Along the spectrum from retired power users to always-avoiders are the “possible, necessary, improbable… spontaneous, savage, solitary, concerted, rampant, or violent” resisters.

But many of these resistances have something in common: they are not a resistance to the technology, they are a resistance to power. These resisters do not object to AI per se; they reject the imposition of technological, corporate control into education that GenAI represents. Take the University of Edinburgh’s Ben Williamson, an academic and educator who writes prolifically about the risks of edtech and the corporatisation of education. In his recent critique of “turbocharged” AI policies in UK education, Williamson urges caution over delegating too much of the decision-making in education to tech companies:

The risks of rushing out AI snake oil into schools are very real. Yet in the English schools sector there is now a very powerful network of fast policy actors seeking cyberdelegated authority to turbocharge technology testing of AI solutions. They are already prototyping tools and publishing use cases, specifying the benefits of AI for teachers, and awarding funds to the edtech industry to build and test new products.
https://codeactsineducation.wordpress.com/2025/01/17/piloting-turbocharged-fast-ai-policy-experiments-in-education/

This is not an uninformed, “Luddite” response to technology. It mirrors what I suspect is the experience of many classroom teachers, who have been frustrated over the years with the frequent and sometimes aggressive incursion of tech into the classroom. In my fifteen years in education, I saw Learning Management Systems come and go, edtech platforms that promised the world and delivered nothing but a hefty fee, and, of course, remote learning and the opportunistic feeding frenzy of tech companies who suddenly had carte blanche to enter the school system. GenAI doesn’t need to be edtech 2.0.

Working with Intransigence

“Working with a government doesn’t imply either a subjection or a blanket acceptance. One can work and be intransigent at the same time. I would even say that the two things go together.”

Foucault (2020). Power, p. 456

Whether it’s government policy or edtech advancing into education (politics and technology are inseparable anyway – just look at the US), resistance doesn’t have to mean outright refusal. We can work with technology and stay committed to our values. It might seem increasingly difficult, but just because a technology exists does not mean that it is inevitable.

US educator Marc Watkins made the distinction in this article, where he defined AI as “unavoidable but not inevitable”:

I think generative AI is unavoidable, not inevitable. The former speaks to the reality of our moment, while the latter addresses the hype used to market the promise of the technology—a sales pitch and little else. Faculty and students have to contend with generative technology in our world as it is now, not as it is promised to be. That should be our focus.
https://marcwatkins.substack.com/p/ai-is-unavoidable-not-inevitable

We have to contend with GenAI, but that does not mean we have to do it blindly, or with a passive, broken acceptance that the technology companies have somehow “won”.

We can do this in many ways. For my part, I’m interested in demystifying GenAI, in removing the “magical” language associated with the technology and encouraging people to treat it like software – because it is. Tech developers would have us believe that AI is a companion, a colleague, a teacher, even a lover. In the (very) near future, some people will experience AI in those ways – they already are. But that doesn’t mean that we, individually, are not entitled to our own opinions. And it doesn’t mean that we can’t share those thoughts with the people we teach.

There will come a time – sooner than we think – when it is countercultural to challenge the “humanity” of AI. When it is going against the grain to suggest that people shouldn’t form relationships with language models. That doesn’t mean you can’t be uncomfortable with that idea, or push back against it in the classroom. Be countercultural. Be intransigent. That’s what education is for.

Power-Knowledge and the Rationalisation of GenAI Use

“In short, it is not the activity of the subject of knowledge that produces a corpus of knowledge, useful or resistant to power, but power-knowledge, the processes and struggles that traverse it and of which it is made up, that determines the forms and possible domains of knowledge.”

Foucault (1995). Discipline and Punish, p. 28

And while we’re on the subject of pushing back, make sure you’re clear about what you’re pushing back against. The advance of AI in education is not driven by neutral or objective processes of knowledge creation – OpenAI’s partnership with Common Sense Media and Khan Academy isn’t an act of altruism. Microsoft and Google’s AI courses for educators are not free because they have an interest in helping teachers. Instead, these things reflect the interplay of power-knowledge – systems of control and influence shaping what is considered valid or useful knowledge. AI tools, and the policies and processes that enable them, are embedded with assumptions, priorities, and struggles that attempt to determine how educators engage with these technologies.

But nor does that mean that the technology itself is inherently useless, or harmful, or problematic. If, like me, you plan on continuing to use AI or want to explore the technology, don’t feel bad about it. Just be prepared to have some clear, rational arguments up your sleeve when faced with the outright refusers.

Jon Ippolito recently drove this home in an article about the environmental impacts of GenAI use. I often walk into conversations about AI where the argument against its use is based on the environmental impact or energy/water consumption of models like ChatGPT. It is true that GenAI models are energy-intensive, but so are other, much more widely used technologies.

Here are the comparisons Ippolito derived from various sources:

Comparisons can be surprising (approximate Watt-hours and liters or ccs):
🎦1000 Wh / 4 L: hour-long Zoom call with 10 people (devices 600 + transmission 200 + server-processing 200).
📺200 Wh / .8 L: hour-long video streamed on a big TV (Kamiya 2020, Carbon Trust 2021).
🪫20 Wh / 80 cc: charging a smartphone (EPA 2024).
📝6 Wh / 24 cc: generating a page with an online chatbot (Brown 2020).
🖼3 Wh / 12 cc: generating an image online (Luccioni 2024).
🔍.3 Wh / 1 cc: one non-AI Google search (Google 2009).
✏️.05 Wh / .2 cc: generating a sentence with an online chatbot (Luccioni 2024).
💻.01 Wh / .04 cc: generating text with a local chatbot (30W x 1s).
https://ai-impact-risk.com/ai_energy_water_impact.html

Not insignificant, but are the same people swearing off AI also abandoning Netflix, Zoom, or their iPhones? Of course, all of these environmental concerns should be (and are) up for debate, but attacking another educator’s use of AI isn’t the way to make progress.
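To put Ippolito’s approximate figures side by side, here is a rough back-of-the-envelope sketch (the constants are taken directly from the list above; they are estimates, not measurements):

```python
# Approximate Watt-hours per activity, from Ippolito's comparison list.
ZOOM_CALL_WH = 1000      # hour-long Zoom call with 10 people
CHATBOT_PAGE_WH = 6      # generating a page with an online chatbot
GOOGLE_SEARCH_WH = 0.3   # one non-AI Google search

# How many chatbot-generated pages use as much energy as one group Zoom call?
pages_per_zoom_call = ZOOM_CALL_WH / CHATBOT_PAGE_WH
print(f"~{pages_per_zoom_call:.0f} chatbot pages = one hour-long Zoom call")

# How many non-AI Google searches use as much energy as one chatbot page?
searches_per_page = CHATBOT_PAGE_WH / GOOGLE_SEARCH_WH
print(f"~{searches_per_page:.0f} Google searches = one chatbot page")
```

On these numbers, one ten-person Zoom hour costs roughly the same energy as generating about 167 pages of chatbot text, which is the point of the comparison: the chatbot is not the outlier.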

In my earlier post from the fence, I made the comment that we should never be directing our resistance or refusal of AI at other educators. Point that anger where it belongs: at the tech companies who are responsible for the development and release of the products. Point it at the powers and processes in control.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch.
