Over the next few months, I’ll be updating my 2023 Teaching AI Ethics collection. In this post, I’ll explain why the updates are necessary and recap the nine original areas from the series.
When I wrote the original series in 2023, ChatGPT was only just on people’s radars. I had started my PhD exploring the implications of AI for teachers of writing and digital texts earlier in 2022, and although I was as surprised as anyone when ChatGPT was released in November of that year, I had already done a lot of reading around the ethical concerns of machine learning and artificial intelligence technologies more broadly.
The original series of articles came from that early research where I identified nine particular areas of concern that I thought educators should be aware of. They were:
- Bias
- Environmental concerns
- Academic integrity
- Copyright
- Privacy
- Datafication
- Emotion recognition
- Human labour
- Power
For the 2023 articles, which include still-relevant case studies and teaching ideas, check out the original series of posts.
The fact that I had to limit the series to only nine areas was an early red flag about how complex these technologies are. But as with any technology, and particularly the digital technologies of the past two decades, responding to artificial intelligence is not as simple as throwing our hands up in the air and saying, “It’s too unethical, I just can’t use it.”
The growing ubiquity of generative artificial intelligence, including large language models and image generation, has meant that most educators will encounter the technology in their classrooms, whether they personally agree with its use or not.
Unfortunately, I’ve seen some educators, including prominent and influential educators on social media, saying that we should just stop talking about AI ethics because the technology is already everywhere. I think the ubiquity and seeming inevitability of the technology in and outside of education gives us more reason to focus on the ethics and discuss these issues with our students.
Why I’m Updating the Series
It’s been over two years since the release of OpenAI’s ChatGPT, and in the years since, we have seen a frantic arms race between major technology companies to produce bigger, more powerful artificial intelligence models and to deploy them into every application and platform possible.
Following Microsoft’s multi-billion-dollar investments in OpenAI, the partnership has extended into Copilot, a chatbot already built into MS 365 products like Word, Excel, Outlook and PowerPoint, and now dominating everything from new Windows-based PCs and laptops to custom-built “enterprise solutions” deployed across entire organisations.
Google has followed a similar trajectory: its Gemini model joined the party comparatively late, but it is already a core feature of Google’s Workspace applications such as Docs, Slides and Gmail, and is baked into the hardware of new Google Pixel phones. New Apple devices ship with both onboard language model-based AI and a partnership with OpenAI (with a competing partnership with Google also on the cards).
Meta, which controls Facebook, Instagram, WhatsApp and other applications, has deployed its Llama-based Meta AI to all of its users for free, whether they want it or not.
And yet, as rapidly as these technologies have grown, or perhaps because of it, the ethical concerns have not diminished.
Overt bias in leading models like OpenAI’s GPT has been squashed, but not removed. Guardrail band-aids have been applied over the top of deep-rooted dataset issues, which then bubble to the surface whenever new models are released or updates are pushed out carelessly.

Image source: Emily Rand & LOTI / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Privacy, a pressing concern since day one, has resurfaced again and again as companies have harvested the personal data of millions of users and compiled it into their training datasets.
Copyright has perhaps been the most publicly overt ethical issue, with whistleblowers revealing shady data collection practices, and investigative journalism from outlets like The Atlantic publishing databases of illegally downloaded copyrighted books that have been consumed by these growing AI models.
And of course, just weeks before publishing this article, we saw the explosion of Studio Ghibli-style profile pics and memes across the Internet generated by OpenAI’s latest image model, which was clearly trained on individual frames of Miyazaki movies.
In the original 2023 series, I organised the nine areas into three stages: Beginner, Intermediate and Advanced. This was largely due to the availability of resources, research and information on each of those areas. At the time, it was quite easy to read about bias and the environmental costs of machine learning, and comparatively difficult to find information on issues like emotion recognition or the human labour costs.
They were also organised that way because of their conceptual difficulty, since these are intended to be teaching materials for use with both educators and students.
This time around, I’ll be running through the nine areas in the same order, but without the divisions between categories. It’s become very clear that all of these issues are as complex as one another, and all of them are equally worthy of discussion and study. It doesn’t matter which order you approach them in.
So in the coming weeks, I’ll be writing about bias, the environment and truth – broadening out from academic integrity to include deepfakes and misinformation.
It’s not all bad news, though. One positive of the explosion of generative artificial intelligence technologies in the last few years has been more public and academic discourse around these ethical concerns. So unlike the 2023 series, this time around I’ll also be talking about what has changed for the better: the companies, researchers and individuals who are pushing back against the systemic biases of generative artificial intelligence, and the organisations doing serious, meaningful work to make AI models more energy efficient and less environmentally harmful.
There are, in 2025, lots of examples of people doing great work to deal with the ethical concerns posed by these complex technologies.
Finally, I need your help. If you’ve read an article, conducted some research, done some work of your own, or if you have a particular question or area of concern you’d like to address, use the contact form below to let me know about it. Hundreds of thousands of people have read the original series. I hope hundreds of thousands more read these updates. I’d like it to be a community effort.
Subscribe to the mailing list for updates, resources, and offers
As internet search gets consumed by AI, it’s more important than ever for audiences to directly subscribe to authors. Mailing list subscribers get a weekly digest of the articles and resources on this blog, plus early access and discounts to online courses and materials. Unsubscribe any time.
Note on the images: In the 2023 version of Teaching AI Ethics I generated images in Midjourney. This time around, I have sourced images from https://betterimagesofai.org/. I still use AI image generators, but due to the environmental concerns and the contentious copyright issues discussed in these articles, I am more conscious of my use. Better Images of AI includes an excellent range of photos, illustrations, and digital artworks which have been generously licensed for commercial and non-commercial use.
Cover image for this article: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Description from website: This piece features corporate power and profit sustained through exploitation, control and domination, lying in the hands of a few decision-making profiteers who are mostly white, and mostly men. Creative direction by AIxDESIGN – aixdesign.co. AI4Media https://www.ai4media.eu/ funded this commission which was made possible with funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 951911.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
