This is the first post in a series exploring the nine areas of AI ethics outlined in the original post. Each post goes into detail on one ethical concern and provides practical ways to discuss these issues across a variety of subject areas.
UPDATE: Here’s a pre-post-script to this post which raises an important point about bias in image generation. It comes from a DM conversation and subsequent comment on the post on LinkedIn:
Excellent comment via Lori Mazor on the image with this post – I’m bringing it out of our DM conversation because it’s an important point, especially in the context of this topic.
Lori highlighted the ‘white male’ bias of the image. I’d noticed the “whiteness” of the image, but not critically thought about the “maleness”. What’s interesting is that the prompt doesn’t contain any reference to people at all, male or female:
/imagine prompt: digital collage, glitch art, post-structuralist, wealth, pained, hegemony, power, tech post feature image, header image --ar 2:1 --q 2 --v 4
Lori’s message has me wondering which words in the prompt have conjured the white male face. I’m going to guess that unfortunately it’s the combination of “wealth” and “power”.
My intent in using only abstract concepts in the prompt was to generate something random and broadly in keeping with the theme (you’ll notice that the following posts in this series have a similar aesthetic). Some of the images do contain female figures; I’m going to go back and explore which words most likely drew this out of the image generator.
On with the article!
As term one rapidly unfolds, the Artificial Intelligence boom kick-started in late 2022 by ChatGPT shows no sign of letting up. Since the start of term, we have seen the release of Microsoft’s new Bing Chat, and OpenAI has updated its terms and conditions to permit use by anyone over 13.
While AI can undoubtedly be a valuable tool in education, it’s important for educators to understand the ethical concerns that surround its use. We must ensure that we are using these technologies in ways that are responsible, just, and fair. The original Teaching AI Ethics post has proved hugely popular, but many educators from primary to tertiary have asked for more details on each of the nine areas. In this post, I’ll explore the first and most widely-known issue: bias and discrimination.
Here’s the original PDF infographic which covers all nine areas:
Algorithmic bias
One of the most pressing ethical concerns of AI is algorithmic bias. Algorithmic bias occurs when the data used to train AI systems reflects the biases and prejudices of society, resulting in discriminatory outputs.
ChatGPT is a prime example of an AI system that can suffer from algorithmic bias. It is a large language model trained on a massive dataset, including the Common Crawl, which contains over 12 years’ worth of scraped web pages. While these datasets give the models tremendous capabilities, they are inherently biased: indiscriminately scraping the internet for data means the dataset can contain racist, sexist, ableist, and otherwise discriminatory language. As a result, ChatGPT can produce outputs that perpetuate these biases and prejudices.
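To make this concrete for students, here’s a minimal, hypothetical sketch in Python of how associations in training text carry through to a model’s behaviour. The tiny “corpus” below is invented purely for illustration; real language models absorb the same kinds of associations from billions of scraped documents.

```python
from collections import Counter

# A tiny, invented "web scrape" that over-represents one stereotype.
corpus = [
    "the engineer said he would fix the server",
    "the engineer said he was proud of his work",
    "the engineer explained his design to the team",
    "the nurse said she would check the chart",
    "the nurse said she was tired after her shift",
]

# Count how often each occupation co-occurs with gendered pronouns.
cooccurrence = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for occupation in ("engineer", "nurse"):
        if occupation not in words:
            continue
        for pronoun in ("he", "his", "she", "her"):
            if pronoun in words:
                cooccurrence[(occupation, pronoun)] += 1

for pair, count in sorted(cooccurrence.items()):
    print(pair, count)

# A model trained on this corpus is more likely to continue
# "the engineer said..." with "he" than "she", simply because of
# the text it was fed, not because of anything true about engineers.
```

Changing a single sentence in the corpus shifts the counts, which is a useful way to show that the “bias” lives in the data rather than in any deliberate design decision.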
Moreover, AI models can reflect the biases and prejudices of society as a whole. The online world, like society at large, underrepresents marginalised groups and overrepresents others. For instance, the prevalence of racism and bigotry on sources like Reddit and Twitter can bleed through the datasets and be reproduced in the output of AI models.
Algorithmic bias can also arise from the training and reinforcement methods used when developing AI systems. For example, predictive policing systems used by law enforcement agencies in the US disproportionately target poor, Black, and Latinx communities, reinforcing existing systemic biases.

Discrimination by default
These biases and harmful outputs don’t just happen on occasion. It seems that Large Language Models like OpenAI’s GPT are almost unavoidably biased. The tremendous volume of discriminatory, gendered, racist, and ableist language in the dataset means that models have a tendency to discriminate by default.
There are some organisations and communities trying to counteract this seemingly inevitable tendency. BLOOM, for example, is a model trained by the BigScience project on a “crowdsourced” dataset which had ethical guardrails in place from its inception, including the avoidance of potentially biased sources. This dataset, called ROOTS, comprises 1.61 terabytes of text spanning 46 languages.
Unfortunately, although BLOOM may be less biased than GPT, the jury is out on whether the bias has been removed entirely. BLOOM is also significantly less powerful than a model like GPT or Google’s LaMDA, and so it is less likely that people will use it as the basis for their own software.
Case Study: Predictive Policing in the US
Predictive policing is the use of data analysis, machine learning, and artificial intelligence (AI) to predict where crimes are most likely to occur and who is most likely to commit them.
It is used by law enforcement agencies to allocate resources and personnel, identify potential criminal suspects, and prevent crime before it happens. However, there are concerns about the potential for bias and discrimination in predictive policing algorithms, as well as questions about the legality and ethics of using AI to predict criminal behaviour.
Critics also argue that predictive policing can reinforce existing biases and inequalities in the criminal justice system, leading to unjust and discriminatory outcomes. This is because the datasets often encode biases that are a product of systemic racism, including police mugshot databases in which Black people and other people of colour are heavily overrepresented.
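One way to explore this mechanism with students is a toy feedback-loop simulation. The sketch below is hypothetical and deliberately oversimplified: two neighbourhoods have an identical underlying rate of offending, but one starts with more historical records, so it receives more patrols, which generate more records, which in turn attract more patrols. All of the numbers are invented purely to show the mechanism.

```python
import random

random.seed(42)

# Two neighbourhoods with an IDENTICAL underlying rate of offending.
true_crime_rate = {"A": 0.1, "B": 0.1}

# Historical records are skewed: neighbourhood A was policed more
# heavily in the past, so it starts with more recorded incidents.
recorded = {"A": 60, "B": 40}

total_patrols = 100  # patrols available each year

for year in range(1, 6):
    total_records = sum(recorded.values())
    for hood in recorded:
        # "Predictive" allocation: patrols follow past records.
        patrols = round(total_patrols * recorded[hood] / total_records)
        # Crime is only recorded where police are actually looking.
        new_records = sum(random.random() < true_crime_rate[hood]
                          for _ in range(patrols))
        recorded[hood] += new_records
    share_a = recorded["A"] / sum(recorded.values())
    print(f"year {year}: records = {recorded}, A's share = {share_a:.0%}")

# Even though both neighbourhoods offend at the same rate, the initial
# skew never corrects itself: A keeps receiving the majority of patrols
# and keeps accumulating the majority of the recorded crime.
```

Students can change the starting records or the true crime rates to see how sensitive the outcome is to the historical data the system inherits.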
In August 2016, a coalition of 17 organisations, including the American Civil Liberties Union (ACLU), issued a statement expressing concern about predictive policing tools used by law enforcement in the United States. The statement highlighted the technology’s racial biases, lack of transparency, and other flaws that lead to injustice, particularly for people of colour.
The statement called for transparency about predictive policing systems, evaluation of their short- and long-term effects, monitoring of their racial impact, and the use of data-driven approaches to address police misconduct. The statement also emphasised the importance of community needs and the potential of social services interventions to address problems for at-risk individuals and communities before crimes occur.
Facial recognition technology poses special risks of disparate impact for historically marginalised communities, such as Black individuals, who are more likely to be stopped by police officers and are overrepresented in law enforcement databases. Recent studies demonstrate that the technology’s inaccuracies are systemic and that it performs worse for people with darker skin.
Companies have announced actions to improve the accuracy of their facial recognition algorithms and diversity of their training datasets, but the scope and effectiveness of such efforts vary across vendors.
There remains an ethical question of whether and when it is appropriate to use facial recognition to address legitimate security concerns, regardless of its accuracy. Guardrails are needed to ensure more equitable use of enhanced surveillance technologies, including facial recognition.
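To show students what “disparate impact” looks like numerically, a per-group error-rate calculation is a useful exercise. The figures below are entirely made up for illustration; real audits (such as large-scale vendor tests) compute the same kind of breakdown from much larger evaluation sets.

```python
# Hypothetical evaluation results for a face-matching system.
# For each group: (false matches, total non-matching comparisons tested).
results = {
    "group_1": (12, 10_000),
    "group_2": (95, 10_000),
}

# A single headline figure looks reassuring...
overall_false = sum(fm for fm, _ in results.values())
overall_total = sum(total for _, total in results.values())
print(f"overall false match rate: {overall_false / overall_total:.4f}")

# ...but the per-group breakdown reveals a large disparity.
for group, (false_matches, total) in results.items():
    print(f"{group}: false match rate = {false_matches / total:.4f}")
```

This is why audits of facial recognition report error rates per demographic group rather than a single average: the average can look acceptable while one group bears most of the errors.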

Teaching AI Ethics
Each of these posts will expand on the original and offer a few suggestions for how and where AI ethics could be incorporated into your curriculum. Every suggestion comes with a resource or further reading, which may be an article, blog post, video, or academic paper.
- Legal studies: What legal precedents exist for protecting marginalised groups from discrimination? Is a biased algorithm legal?
- English and Literature: How have certain groups been silenced or oppressed throughout history? What is the implication of this “gap” in the written record of the internet when it is used as data to train an AI?
- Mathematics: What is an algorithm? How do algorithms and probability link to policing and other societal functions?
- Social Studies: How do systemic bias and discrimination affect different groups in society? How can AI perpetuate or challenge these biases?
- Computer Science: How can AI models be designed to avoid algorithmic bias and discrimination? What ethical considerations should be taken into account when designing AI systems? (See the short fairness-metric sketch after this list.)
- Philosophy: What ethical theories can be applied to the use of AI in society? How can we balance the benefits and risks of AI, particularly when it comes to bias and discrimination?
- Science: How can data collection and analysis be used to address bias and discrimination in AI systems? What role do scientists and researchers play in ensuring ethical use of AI?
- Business and Economics: How do bias and discrimination in AI affect markets and business outcomes? What economic incentives exist for companies to ensure ethical AI practices?
- Media Studies: How do media representations of different groups contribute to bias and discrimination in AI? How can media literacy be used to address these issues?
- Psychology: How do bias and discrimination affect individuals and society? How can we design AI systems that take into account psychological factors such as implicit bias?
- Health and PE: How might AI be used in healthcare and what are the potential ethical implications? How might AI discriminate against people in the healthcare system?
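For the Computer Science suggestion above, a concrete starting point is to have students compute a simple fairness metric. The sketch below checks “demographic parity” (whether a model’s positive-prediction rate is similar across groups) on invented data. The data, the loan-approval framing, and the tolerance threshold are all hypothetical, and demographic parity is only one of several competing definitions of fairness.

```python
# Invented predictions from a hypothetical loan-approval model.
predictions = [
    {"group": "X", "approved": True},
    {"group": "X", "approved": True},
    {"group": "X", "approved": False},
    {"group": "X", "approved": True},
    {"group": "Y", "approved": False},
    {"group": "Y", "approved": True},
    {"group": "Y", "approved": False},
    {"group": "Y", "approved": False},
]

def approval_rate(group: str) -> float:
    """Share of applicants in `group` that the model approved."""
    rows = [p for p in predictions if p["group"] == group]
    return sum(p["approved"] for p in rows) / len(rows)

rates = {g: approval_rate(g) for g in ("X", "Y")}
print(rates)  # {'X': 0.75, 'Y': 0.25}

# Demographic parity asks whether these rates are (roughly) equal.
gap = abs(rates["X"] - rates["Y"])
print("parity gap:", gap)
print("within tolerance:", gap <= 0.1)  # the 0.1 tolerance is an arbitrary choice
```

A natural classroom discussion follows: are equal approval rates even the right goal, or should the model instead be equally accurate for each group? The tension between those definitions is itself an ethics lesson.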
The next post in this series will explore the environmental costs associated with Artificial Intelligence and what companies are doing – or not – to mitigate the huge impact of AI. Join the mailing list for updates:
Got a comment, question, or feedback? Get in touch: