This post is the second in a series of nine articles updating the “Teaching AI Ethics” resources published in 2023. In the original series, I presented nine areas of ethical concern based on research into artificial intelligence, including the growing technologies of generative artificial intelligence.
The environmental impact of artificial intelligence is one of the most important ethical issues, but also one of the most difficult to talk about. We know that artificial intelligence applications, like all of our digital technologies, have a huge impact on the environment, but a lack of transparency and accountability in the industry makes it very difficult to pinpoint exactly what that impact looks like.
In this article, I will look at research published between the original 2023 post and 2025 to shed some light on what we really know about the costs of training and using AI. I will also point out some of the misinformation and inaccuracies which frequently come up when I run conferences and presentations on artificial intelligence; since 2023, there has been no shortage of attention-grabbing headlines which misrepresent the impact of AI.
In education, we need to have reasoned, rational discussions with students about the future of artificial intelligence technologies.
Artificial Intelligence is an extractive technology
In 2023 I referred to Kate Crawford’s book, Atlas of AI, and it still remains for me one of the most important texts in this discussion. In Atlas, Crawford draws analogies between artificial intelligence and the mining industry, focusing on both the environmental and the socio-political impacts. One important aspect of the book is that Crawford is talking much more broadly about artificial intelligence than the current popular use of the term, which often refers to generative AI and sometimes discrete applications like ChatGPT.
When Crawford writes about the environmental impact of AI, she is talking about:
- Recommender systems, such as those powering Netflix and Spotify
- Predictive algorithms like those which determine what pops up on your Facebook feed or YouTube
- Broader implications of data classification – such as labelling images – and analytics
In this article on environmental issues, I will talk a lot about generative artificial intelligence, like text and image generation. But ChatGPT, Copilot, Gemini, Midjourney and so on are only part of the bigger AI environment puzzle, and currently not the biggest piece.
In drawing analogies to the mining industry, Crawford points out some key ideas:
- The infrastructure behind the cloud is made up of data centres, which consume phenomenal amounts of electricity and which rely on water to cool the systems.
If you’ve ever tried to do something computationally taxing on a laptop, you’ll know how hard a computer fan has to work to keep things cool. Imagine that scaled up by orders of magnitude, and you can picture how much waste heat is generated by data centres. Companies like Microsoft have built data centres in cold-climate countries and even attempted to build them underwater in efforts to reduce the impact of cooling these systems. But just like the mining industry, the use of data centres to meet the computing requirements of artificial intelligence is incredibly resource heavy.

- The hardware used to run artificial intelligence depends on extractive mining processes.

Lithium, silicates, rare minerals and other elements are used to build the hardware that runs artificial intelligence, whether it is the graphics processing units used to train generative AI models or the iPhone that you use to converse with ChatGPT. The physical cost of mining these ingredients is incredibly high, as is the e-waste produced when these technologies reach the end of their short life cycles.

- Where mining extracts oil, artificial intelligence extracts data.

Although this is not tied directly to the environmental conversation, I will return to it when I update the articles on datafication and privacy. Every industry produces some sort of product. One product of the artificial intelligence industry is data. The raw material for that data is our personal information and, linking back to the environmental concerns, it is the processing of this raw material which consumes the most energy.
Taking Kate Crawford’s position of artificial intelligence as an extractive technology akin to the mining industry as a starting point, let’s look at the two main areas where generative artificial intelligence has an impact on the environment.

Training-time impact
One of the ways in which OpenAI’s GPT-3.5, the model publicly released as ChatGPT, changed the trajectory of generative AI research was that they threw more data and more computational power at their system than any researchers had before. They effectively demonstrated that scaling up the volume of raw material and simultaneously increasing the amount of computation used to learn the rules of that training material could significantly improve the output of the models. Suddenly, a generative pre-trained transformer (GPT) went from an adequate way to process natural language and replicate simple parts of human speech to an incredibly versatile technology fluent in dozens of languages and much more capable as a chatbot. Similar ideas of scale have been applied to image generation.
But just as burning more coal produces not only more electricity, but also more pollution, burning through more raw materials of AI training data produces more adverse effects.
In 2024, an article was published estimating how much energy would be required to train a GPT-4-class model (GPT-4 being OpenAI’s March 2023 release).
Despite how people have interpreted the news, the paper itself does not give a hard number for training a GPT-4-class model. Instead, it:
- Points to the scale of the problem in qualitative terms. The article notes that models “such as OpenAI’s GPT-4” need “a staggering amount of electricity,” stressing that their training clusters can draw 7–8 times more power than a typical high-performance computing workload. These are estimates based on earlier research.
- Provides the last solid public benchmark – GPT-3 – as a yardstick. Citing the 2021 Google/Berkeley study, it reminds readers that training GPT-3 burned 1,287 MWh of electricity and emitted about 552 t CO₂e (roughly the annual electricity use of 120 average U.S. homes; a quick check below shows how these figures hang together).
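That comparison checks out, assuming the EIA’s commonly cited ballpark of roughly 10.5 MWh of electricity per average U.S. home per year (my assumption, not a figure from the paper). A minimal sketch:

```python
# Sanity-checking the GPT-3 training benchmark cited above.
# Assumption: an average U.S. home uses ~10.5 MWh of electricity per year
# (a commonly cited EIA ballpark, not a number from the paper itself).
TRAINING_ENERGY_MWH = 1_287   # GPT-3 training energy (2021 Google/Berkeley study)
TRAINING_EMISSIONS_T = 552    # tonnes of CO2-equivalent
HOME_MWH_PER_YEAR = 10.5      # assumed average U.S. household electricity use

homes_equivalent = TRAINING_ENERGY_MWH / HOME_MWH_PER_YEAR
grid_intensity = (TRAINING_EMISSIONS_T * 1000) / TRAINING_ENERGY_MWH  # kg CO2e/MWh

print(f"Equivalent to ~{homes_equivalent:.0f} homes' annual electricity")   # ~123
print(f"Implied grid carbon intensity: ~{grid_intensity:.0f} kg CO2e/MWh")  # ~429
```

The numbers hang together, and the implied grid intensity of roughly 429 kg CO₂e per MWh is plausible for a largely fossil-fuelled grid, which is a reminder that where a model is trained matters as much as how.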
Media articles like this one from the NY Times spread virally, and the topic comes up frequently in the sessions I run on artificial intelligence. Training large language models, particularly huge multimodal state-of-the-art models like GPT, uses a great deal of energy. Training-time impacts also include the aforementioned water use. Again, estimates emerged in 2024 as to roughly how much water would be required, based on comparisons to open-access research and other technologies.
One problem with the media headlines is that people take them at face value when they are not necessarily representative of the true impact of these technologies, or misinterpret the underlying research. Most of these estimates are extrapolations from completely different models – for example, OpenAI’s GPT-2 or GPT-3, about which much more public information existed before “Open” AI closed its doors. These estimates could be lower or higher than the reality: lower because new AI models may well train far more intensively and on more data than we are aware of; higher because new efficiencies in algorithms and processes may actually reduce some of the energy requirements of training new models.
The point is that we can’t know for sure. So when somebody says to me, “Have you heard that ChatGPT consumes 500ml of water per prompt?” my first response is to challenge that specific number, not because I want to let OpenAI off the hook, but because I think supposition is not helpful in this conversation.
The important point is that, cumulatively, data centres consume an estimated 3–4% of all US electricity, and are using so much water that they are placing serious strain on supplies in entire regions.
Inference-time impact
This is a much more recent conversation. Inference time means “the point at which the user gets the AI to do the thing”: the point at which the user inputs a prompt to ChatGPT and the chatbot responds; the point at which the user generates an image in a platform like Firefly or Midjourney; or the point at which a company deploys OpenAI’s GPT model to conduct sentiment analysis on customer satisfaction data.
Recently, there has been an uptick in interest regarding how much energy these systems use when we use them. Again, there is a complete lack of transparency and a lot of confusing press around these issues. To try to cut through some of that noise, I am going to refer only to one article, a piece of research titled Power Hungry Processing: ⚡️Watts ⚡️Driving the Cost of AI Deployment? by Sasha Luccioni, Yacine Jernite, and Emma Strubell.
I’m using this article because it is one of the only pieces I have read thus far which uses sound methodology and publicly available information to estimate how much energy different types of artificial intelligence models consume. However, a collection of articles from MIT Technology Review has also recently been released which covers a lot of this ground, including comparisons of different models and the impacts of data centres. It’s well worth reading in full.
https://www.technologyreview.com/supertopic/ai-energy-package
Before I get into the details of the Luccioni article, some context:
In this 2024 article, Luccioni et al. refer to a variety of AI systems. Like Crawford’s Atlas of AI, they are not just talking about generative artificial intelligence. They draw comparisons between:
- Text classification
- Image labelling
- Question and Answer (QA) chatbots
- Text summarisation
- Masked language modelling
- Object detection
- Text generation
- Image generation
- Task-specific models
- Multi-purpose models
That sounds like a lot, but it’s an important piece of this conversation around the environmental impact of generative AI. Many of these technologies, such as text classification, image labelling and QA, are well-established areas of machine learning which have been used for a decade or more in many industries. Masked language modelling, where a model learns to predict hidden words in a sentence, underpins models like BERT, which are routinely fine-tuned for tasks such as sentiment analysis: determining whether the language in large text-based datasets is positive, negative, or neutral. These are “old-fashioned” AI systems, some of which are being replaced in some situations by multi-purpose large language models like GPT.
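To make that contrast concrete, here is a minimal sketch of the same sentiment task done two ways, using Hugging Face’s pipeline API. The models here are illustrative choices on my part, not the checkpoints benchmarked in the paper:

```python
# Contrast: a small task-specific classifier vs. a general-purpose
# generative model, both pointed at the same sentiment task.
from transformers import pipeline

review = "The new timetable app is slow and keeps crashing."

# Task-specific: a compact fine-tuned classifier (the pipeline defaults
# to a small DistilBERT checkpoint) answers the question directly.
classifier = pipeline("sentiment-analysis")
print(classifier(review))  # e.g. [{'label': 'NEGATIVE', 'score': 0.99}]

# Multi-purpose: a generative model can be prompted into the same task,
# but it runs a far larger free-form generation to get there. GPT-2
# stands in here for any general text-generation model.
generator = pipeline("text-generation", model="gpt2")
prompt = f"Review: {review}\nSentiment (positive or negative):"
print(generator(prompt, max_new_tokens=5)[0]["generated_text"])
```

The point is not that GPT-2 gives a better or worse answer; it is that the generative route spends a much larger forward pass on a job the small classifier does in one step.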
There are two important implications of this for this research:
- The researchers used models which are not directly comparable to state-of-the-art, proprietary models.
For example, to test text generation, the researchers used GPT-2, an earlier model from OpenAI. They also used models like BLOOM and some of Facebook’s early LLMs. These text generation models are significantly smaller and less complex than GPT-4, Llama, Google Gemini and so on. You cannot draw a like-for-like comparison between the energy consumption of GPT-2 and the energy consumption of GPT-4.

- It is reasonable to compare and contrast energy consumption across task types and model classes.

This is the real point of the article – to demonstrate that, per 1,000 inferences, energy use varies wildly across tasks and model types. The takeaway is not “X number of generations only consume X amount of energy,” but rather “multi-purpose generative models consume significantly more energy than task-specific models when used for the same job.”
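Measurements of this kind can be reproduced at small scale with open tooling. The sketch below wraps an inference loop in the open-source codecarbon library’s emissions tracker; the model, task, and project name are my own illustrative choices rather than the paper’s exact setup:

```python
# A minimal sketch: estimating the emissions of 1,000 inferences using
# the open-source codecarbon library. Model and task are illustrative,
# not the exact configuration from Luccioni et al.
from codecarbon import EmissionsTracker
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
texts = ["The new timetable app is slow and keeps crashing."] * 1000

tracker = EmissionsTracker(project_name="inference-benchmark")
tracker.start()
for text in texts:
    classifier(text)
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"~{emissions_kg * 1000:.2f} g CO2e for 1,000 inferences")
```

Swapping the classifier for a text-generation or image-generation pipeline and re-running the same loop is enough to see the orders-of-magnitude gaps the paper reports.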
How much energy do different models consume?
Here’s a quick look at how much energy these models consume without getting too bogged down in the technical details. The graph below, taken from Luccioni et al.’s paper, shows the model emissions for different task types measured in grams of CO2.

For me, this paper represents a much more important conversation than “ChatGPT consumes X millilitres of water” or “every prompt uses this much electricity.” The authors are not trying to say that machine learning or generative AI research should stop, or that individual users should be held accountable for how many times they use ChatGPT in a day. They are pointing out that generative artificial intelligence systems, especially multi-purpose generative AI that includes image generation, are incredibly resource intensive compared to other technologies which may be better suited to the job.
This is so important that I’m going to repeat it in big bold letters and then explain what that means for education.
Generative artificial intelligence systems are incredibly resource intensive compared to other technologies which may be better suited to the job.
Implications of AI energy use in education
I have given presentations where I point out that an individual’s use of ChatGPT may consume less energy overall than, for example:
- Streaming Netflix for one hour
- A Zoom call with 10 people
- Driving a car for a few miles
I believe that this is true, based on legitimate research, including this synthesis of research from educator and author Jon Ippolito. Working from existing studies, Ippolito arrived at slightly different numbers than Luccioni, but they are comparable.
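To show the scale of those comparisons, here is a back-of-the-envelope sketch. Every number in it is a loudly hedged assumption rather than a measurement: published per-prompt estimates for ChatGPT range from roughly 0.3 Wh to 3 Wh, and the streaming and driving figures are similarly rough ballparks.

```python
# Back-of-the-envelope per-use energy comparison. All values are rough,
# contested public estimates, NOT measurements:
#   - ChatGPT prompt: published estimates span ~0.3-3 Wh; high end used here
#   - one hour of video streaming: ~80 Wh (an IEA-style ballpark)
#   - one mile in a petrol car: ~1,100 Wh of fuel energy (assumes ~30 mpg)
ESTIMATES_WH = {
    "one ChatGPT prompt (high-end estimate)": 3.0,
    "one hour of video streaming": 80.0,
    "one mile driven in a petrol car": 1_100.0,
}

prompt_wh = ESTIMATES_WH["one ChatGPT prompt (high-end estimate)"]
for activity, wh in ESTIMATES_WH.items():
    print(f"{activity}: ~{wh:g} Wh (~{wh / prompt_wh:.0f}x one prompt)")
```

On these assumptions, dozens of prompts cost less than an hour of streaming.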
However, I think that this “individual use” narrative obscures the bigger problem, which is that a lack of transparency and regulation regarding artificial intelligence means we have no idea what energy consumption looks like at scale in education.
We have a responsibility to raise these concerns with students and to carefully consider how we deploy these technologies across education systems. Rather than placing the burden on individual teachers, we need to ask whether deploying large language models across an entire education sector – such as Estonia has recently agreed to – is the best use of resources. Many of the tasks we use these models for could be achieved with less energy-intensive technologies, or even without technology at all.
Do we need better image generation models for lesson resources, or more reliable sources of open access images? Do we need to use ChatGPT to analyse parent surveys, or is there an older, more efficient technology that would do the same? Should I use Gemini to write a lesson plan, or sit down with a colleague over a coffee and do it myself?

Cristóbal Ascencio & Archival Images of AI + AIxDESIGN / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Teaching Environmental Issues
In the original 2023 collection, each article ended with a selection of ideas for teaching the issue in the context of existing curriculum areas. These 2025 updates will similarly align concepts from the articles to standards from typical curricula across the world, in particular the Australian, UK, US, and IB curricula. For readers teaching in Higher Education, these examples will also be suitable across a wide range of disciplines.
English
In the English curriculum, students critically evaluate texts, perspectives, and themes, making it an ideal space to explore environmental narratives and “greenwashing” in media. Students can analyse persuasive or expository texts on digital sustainability, asking “Is this tech company truly ‘green’?”, “How do tech advertisements use environmental language to mislead?”, or “What narratives are missing in the conversation about e-waste?” These lend themselves well to persuasive writing, media analysis, or debates on digital environmental responsibility.
Science (Environmental Science / Earth & Space Science)
Science courses frequently focus on sustainability, ecosystems, and the impact of human activity on the planet. This aligns naturally with questions such as “What is the carbon footprint of cloud computing?”, “How do server farms affect local ecosystems?”, or “Can digital technology help monitor and mitigate climate change?” Students can conduct research projects comparing environmental costs and benefits of digital solutions or use data sets to model pollution caused by e-waste.
Digital Technologies / Computer Science
Digital Technologies curricula (including the Australian, UK, and US) explicitly include sustainability and environmental impact. Students might examine “How energy-efficient is the code we write?”, “What happens to old hardware when we upgrade?”, or “Can we design low-impact apps or systems?” Projects could include building sustainable tech prototypes, auditing energy use in computing, or exploring circular design principles in software and hardware development.
Geography
Geography investigates human-environment interactions and is a great context for studying digital technologies’ ecological footprints. Students could ask, “Where do raw materials for smartphones come from?”, “How does digital infrastructure affect urban and rural land use?”, or “What role do satellites and GIS play in environmental monitoring?” Case studies on mining for rare earth elements, digital deserts, or tech-fuelled deforestation would deepen geographical inquiry skills.
Design and Technologies
This subject encourages students to design solutions with awareness of social and environmental sustainability. Key teaching questions include “How can we reduce the lifecycle impact of tech products?”, “What is eco-design in the context of digital devices?”, or “How can we apply the principles of ‘cradle-to-cradle’ or closed-loop design to electronics?” Students might engage in sustainable redesign challenges or audit the energy use of different design tools.
Civics and Citizenship
As students examine democratic responsibility, rights, and participation, they can explore digital environmental justice: “Who bears the environmental burden of digital consumption?”, “Should governments regulate e-waste exports?”, or “What policies support equitable access to green technologies?” These topics are excellent for role-play debates, policy pitches, or mock UN climate tech summits.
Mathematics (Statistics & Data)
In mathematics, students interpret data and trends, opening up questions like “How much CO2 is generated by a Google search?”, “What do energy use graphs of tech companies reveal?”, or “How can we model the global growth of e-waste?” Students could analyse real-world datasets on energy consumption, digital product lifecycles, or climate projections influenced by digital tools.
Visual Arts / Media Arts
Visual Arts and Media Arts students often explore themes of communication, critique, and message. Prompts might include “How can digital art raise awareness of e-waste or tech pollution?”, “What does digital decay or ‘digital plastic’ look like?”, or “Can we create installations from recycled AI output?” Students could design visual campaigns, create artworks from obsolete tech, or critically assess how digital art platforms contribute to or challenge environmental issues.
Theory of Knowledge (IB)
In TOK, students explore how knowledge is constructed and evaluated, making it fertile ground for questions like “How does digital surveillance impact the environment and our understanding of ethical responsibility?”, or “Do we have a moral obligation to consider the environmental cost of digital knowledge systems?” These are excellent for essays, presentations, and cross-disciplinary inquiry.
Note on the images: In the 2023 version of Teaching AI Ethics I generated images in Midjourney. This time around, I have sourced images from https://betterimagesofai.org/. I still use AI image generators, but due to the environmental concerns and the contentious copyright issues discussed in these articles, I am more conscious of my use. Better Images of AI includes an excellent range of photos, illustrations, and digital artworks which have been generously licensed for commercial and non-commercial use.
Cover image for this article: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

All articles in this series are released under a CC BY NC SA 4.0 license.
Subscribe to the mailing list for updates, resources, and offers
As internet search gets consumed by AI, it’s more important than ever for audiences to directly subscribe to authors. Mailing list subscribers get a weekly digest of the articles and resources on this blog, plus early access and discounts to online courses and materials. Unsubscribe any time.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:

