It’s Uncomfortable on the Fence but at Least the View Is Nice

I often get asked to explain, or more accurately, defend my position on AI in education. I actually quite like sitting up here on the fence with a good view of both the pros and the cons. I’ve also got a lot of respect for people on either side of the fence: the critics who wouldn’t touch AI with a 10-foot virtual pole, and the early, enthusiastic adopters who’ll find a use for it in everything they possibly can. I think we benefit from both perspectives, but I just can’t make myself fall to one side or the other.

My opinion on AI changes hourly, vacillating from “Oh my God, this is terrible” to “Hey, look at this cool thing I just did with ChatGPT.” Because I’ve been asked a few times in the past couple of weeks how I feel arguing both for and against AI, I thought I’d lay out some of those positions in a blog post. I’m sure that three or four minutes after I make this post, I’ll change my mind again.

minimalist illustration of a man sitting on a fence, dark blue silhouette against pale blue sky, soft, sketchy. Zoom out, and show crowds of silhouetted people waving placards on both sides of the fence. zoom out even further. lots of placards, wide angle establishing shot

Is AI Evil?

Short answer: yes. Long answer: still, yes.

I mean, it’s hard to view a technology that’s inherently racist, classist, sexist, bad for the environment and basically designed to line the pockets of a handful of billionaires and trillionaires as anything but a tool for corporate greed and oppression. You only have to look at the biggest AI developers – Microsoft, Google, Meta, Amazon, Apple – to understand that this is a technology built on our data, deployed without our permission, and then integrated into every digital technology, whether we want it there or not.

Through a callous disregard for copyright and intellectual and cultural property, AI companies have produced sprawling great monsters of a technology which devour computational power, fresh water, and a tremendous amount of energy, producing artificial intelligence designed to compete out of existence the very people whose work was scraped to create it. And I mean, come on, that’s evil, right?

The obvious defence against this, and the one I hear most often, is that technology is neutral, and it’s the people who wield it who determine whether it’s good or bad. In my fence-sitting opinion, that’s kind of bollocks, and a bit of a cop-out.

There’s no inherent inevitability to this technology. It’s been designed, developed and deployed in such a way that means that much of its use can only be evil and harmful.

Sure, a low-income or minority demographic of students in a given country could try to use an artificial intelligence chatbot to educate their way out of poverty and inequality. But improving systemic structures around teaching, funding teachers and providing educational resources could do that too, and at a fraction of the environmental costs. So yes, this is a fundamentally problematic technology. If we want to avoid moralising and the word evil, then maybe we can settle on a few other adjectives: damaging, harmful, dangerous, unsustainable.

Is AI Useful?

The problem is it’s also really bloody useful. Having said that there could be systemic levers to help disadvantaged students rather than AI, it might actually be simpler in some cases to just use AI, because successive trials and failures have shown that it’s not as simple as putting more teachers into classrooms or throwing more money at schools. The socio-economic, cultural and political pressures that shape these situations are just as likely to be ameliorated by access to better technology, right? Maybe.

One thing I do know is that I personally find generative artificial intelligence very useful. I run professional learning for educators, and I’m studying for a PhD. I’m a consultant with a very small and streamlined business, and I do the work of maybe half a dozen people by relying heavily on large language models and image generation, particularly ChatGPT, Claude and Adobe Firefly. I write code with AI. I transcribe verbal drafts of articles with AI. I use it (sparingly) as a (frequently inaccurate) research tool.

And I’ve seen educators and students alike using it for great things. Maybe it’s not as useful as the technology companies would have us believe, and it certainly isn’t going to revolutionise the education system, but that doesn’t mean it’s without utility.

Is It Worth the Cost?

And so the question becomes, do the benefits outweigh the harms? I don’t know. Does the benefit of driving a petrol car outweigh the environmental cost? I live on a farm in regional Australia – without a car, I couldn’t get anywhere, and I couldn’t do a significant portion of my work. I could switch to an electric vehicle, but right now, I don’t feel confident with Australia’s laughable EV infrastructure, particularly out here in the country, and anyway, most of the time I’d be charging that car with electricity generated from fossil fuels. But I’m building a house that’s totally off grid. Does that balance out my 2015 Nissan Qashqai?

I use social media to share these blog posts, even though I’m painfully aware of the harms done by social media to people’s mental health, the addictive qualities of these platforms and the way they siphon all of our personal data into the hands of those same companies building the artificial intelligence that I’m writing about. Do the benefits of using social media for my business and my own personal amusement outweigh the cost of my attention, or the way I feel when I suddenly realise that my kids have been watching me mindlessly scrolling for far too much time?

These tensions are nothing new to us. Personally, I think the healthiest thing we can do is poke and prod and be willing to sit on the fence for a little while. Since I started this PhD in 2022 (ahem, before ChatGPT was released. Just saying), I’ve been deeply immersed in reading about and exploring artificial intelligence technologies, more than most educators have had the time or the inclination to. But I still don’t have any answers. I still don’t have a firm yes or no, or a leaning in one direction or the other.

What I will say is this: if you see somebody sharing their experience with artificial intelligence, don’t rush in to shut them down with your own perspective. If somebody shares a handy prompt for ChatGPT, don’t immediately respond with an assortment of facts about the energy consumption of model training. Likewise, if somebody shares a post about inherent racial biases, don’t be too quick with the whataboutisms: ah, but what about human bias? What about using a less biased AI? What about curating the training data more carefully? Because the person sharing the ChatGPT prompt and the person concerned about AI biases are both right.

And of course, our views on this technology are informed by our perspectives on everything else. I’m aware that my wobbly perch on the fence is still a position of privilege: I’m playing life on the lowest difficulty setting, after all.

I recently deleted my Twitter account and set up shop on Bluesky, alongside the 15 or 20 million others who’ve joined in the past couple of weeks. It has been called an echo chamber, but I’ve found that, at least as far as AI is concerned, there is still a diverse range of voices. There are:

  • Artists posting the reasons they do not and would never use artificial intelligence
  • Educators sharing the ways they’re using artificial intelligence in the classroom with students
  • Academics writing about artificial intelligence as a colonising tool that reinforces social prejudices
  • Academics exploring students’ perspectives on artificial intelligence and trying to figure out how they’re actually using it

All of these perspectives are valid, and whether you fall to one side of the fence or the other, or you cling desperately to the wobbly fence, we need as diverse an array of perspectives as possible. Because this technology is being enacted without our permission – the technology companies are not waiting for educators or students to give them the green light before releasing wave after wave of artificial intelligence – we need people who know how to use it and people who know why not to, and hopefully a few people like me who can’t really make their mind up.

Just my fortnightly reminder that we can be critical of AI without being snarky and derisive, and we can use AI without pouring that special sauce over everything we see. Education benefits from adopters, resisters, and middle…ers.

— Leon Furze (@leonfurze.bsky.social) November 19, 2024 at 12:46 PM

Come on over to Bluesky where this discussion originated – I moved there a week ago and it’s already home to some great AI in education discussions from both sides of the fence. If you’re not sure what Bluesky is or why you should bother, then check out this article first:

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
