A few weeks ago I wrote an article explaining why I’m still on the fence with AI, and refuse to tip entirely into either refusal or full adoption. In this post, I’m pulling together some of the most-read articles on this site from both sides of the fence, to add a little weight to my quite possibly precarious position.
I honestly think that balance is the key: yes, Generative AI technologies are an ethical minefield, and yes, resistance and refusal are both sound options. But on the other hand, if educators don’t learn how to contextualise this technology through their own expertise, the tech will be done to them either way. I don’t think this is defeatist, but I do believe that much of the “inevitability” narrative of AI is sadly true.
The cat is firmly out of the bag as far as student use is concerned, and efforts from Microsoft, OpenAI, Google and co. to integrate GenAI into every digital system will mean that the technology becomes unavoidable in many contexts, including education institutions.
Does that mean we should throw criticality out of the window? Of course not.
Here are a few of my articles for and against using AI in education:
Using AI in Education
First up, one of the articles I wrote in early 2023 with guidance on how educators can use AI for planning, updating units, communications, and so on. This post became the basic outline for the Practical AI Strategies book published in 2024, and has been a consistent addition to my professional learning sessions since then.
A year later, I updated that original series with six more areas, including design and differentiation:
As well as the general posts about prompting and text-based GenAI, I’ve written a bunch of articles over the past couple of years looking at specific tools and technologies including ChatGPT, various image generators, Google’s Notebook LM, and code generation tools. They’re already grouped together on this page:
I’ve also written about the need to balance critique with creative uses of AI, including being more mindful about how we consume AI in order to make sure it isn’t something which is “done to us”.
And, for those interested in using GenAI but unsure of where to start, I’ve provided this list of the applications I regularly refer to or use in professional development sessions and conferences:
Resisting AI in Education
I spend about half my time writing ‘how to’ style posts and exploring the positive potential of AI, and the other half writing more critical posts like those that follow. If I’m being completely honest (and skimming through my recent posts) I have peaks and troughs of practice and critique. Lately, there seem to have been many more “critical” than practical posts. Maybe it’s just me being cranky as the year draws to a close…
Probably the most important critical posts on this site have been the Teaching AI Ethics series, which also spawned an open-access book at LibreTexts. The original post and the series of nine articles that followed discuss in detail many of the reasons why people resist or refuse AI:
I’ve also obviously got a personal chip on my shoulder about the implications of AI for writing and writing instruction (a big enough chip that it’s the subject of my PhD), and so it’s not surprising that often my anti-AI commentary comes out in posts like these:
And I think I’ve got good reason to be concerned. Technology isn’t neutral, and GenAI absolutely isn’t a benign technology. A lot of AI supporters will say that “technology isn’t good or evil, it’s the human users”, but I think that’s a simplification. Generative AI, in the form of Large Language Models, has been almost synonymous with one single company since 2022: OpenAI. And OpenAI is far from a neutral (or open) company.
As a writer, and a teacher of writing, I get fired up when OpenAI starts telling us what we should think about the future of writing.
It’s not just writing: it’s education and educators along with it. Of course, OpenAI isn’t the only culprit. Microsoft and Google have a long history of corporatising and co-opting education in both K-12 and Higher Ed, and OpenAI is pretty late to the party. But the narratives about how AI should and must be used by educators and students are being driven from outside of our sector, and with interests misaligned with teaching and learning.
Somewhere in the Middle
I encourage all educators to read widely, share stories, and make your own informed decisions.
You will hear people say, “you can’t use AI because ChatGPT consumes half a bottle of water per prompt!” Challenge throwaway comments – they represent moral panic, not fact (ChatGPT consumes an enormous amount of water in aggregate, but the estimate is around 500ml per 50 prompts, and of course it varies with the length of the input and output). Google search in 2022 – pre-AI integration – used 5.6 billion gallons (21 billion litres) of water. Were those same people yelling on social media about the environmental cost of Googling?
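If you want to sanity-check the arithmetic yourself, here is a minimal sketch using the estimates quoted above. The figures are approximations drawn from third-party estimates, not official company reporting, so treat the outputs as order-of-magnitude comparisons only:

```python
# Back-of-envelope water estimates (approximate, not official reporting).

# ChatGPT: roughly 500 ml per 50 prompts, per the estimate quoted above.
chatgpt_ml_per_50_prompts = 500
chatgpt_ml_per_prompt = chatgpt_ml_per_50_prompts / 50  # ~10 ml per prompt

# Google Search in 2022 (pre-AI integration): ~5.6 billion US gallons total.
google_gallons_2022 = 5.6e9
google_litres_2022 = google_gallons_2022 * 3.785  # litres per US gallon

print(f"ChatGPT: ~{chatgpt_ml_per_prompt:.0f} ml of water per prompt")
print(f"Google Search 2022: ~{google_litres_2022 / 1e9:.0f} billion litres")
```

The point of the comparison isn’t precision – it’s that a single prompt is closer to a teaspoon or two than half a bottle, and that large-scale water use predates GenAI.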
Instead of barbed comments thrown between educators, we would be better off directing that energy at the companies themselves. What is OpenAI doing to reduce its water use? Why do we need to rely on estimates, instead of transparent reporting from the company itself? What alternatives to ChatGPT exist that are more energy efficient and sustainable?
You will also, of course, hear people say, “if you don’t use AI you’ll be left behind!” Reject peer pressure and hype just as hard. Who are you falling behind exactly? Claims about boosts to productivity are mostly overblown (unsurprisingly, given many of the studies of AI-boosted efficiency are published by companies like Microsoft or the big consulting firms), and AI might just make you dumber. You can afford to rock back on your heels and wait to see how some of this plays out.
Join me up on the fence and admire the view for a while.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
