Teaching AI Ethics: Bias and Discrimination

This is the first post in a series exploring the nine areas of AI ethics outlined in this original post. Each post will go into detail on the ethical concern and provide practical ways to discuss these issues in a variety of subject areas.

UPDATE: Here’s a pre-post-script to this post which raises an important point about bias in image generation. It comes from a DM conversation and subsequent comment on the post on LinkedIn:

Excellent comment via Lori Mazor on the image with this post – I’m bringing it out of our DM conversation because it’s an important point, especially in the context of this topic.

Lori highlighted the ‘white male’ bias of the image. I’d noticed the “whiteness” of the image, but not critically thought about the “maleness”. What’s interesting is that the prompt doesn’t contain any reference to people at all, male or female:

/imagine prompt: digital collage, glitch art, post-structuralist, wealth, pained, hegemony, power, tech post feature image, header image --ar 2:1 --q 2 --v 4

Lori’s message has me wondering which words in the prompt have conjured the white male face. I’m going to guess that unfortunately it’s the combination of “wealth” and “power”.

My intent in using only abstract concepts in the prompt was to generate something random and broadly in keeping with the theme (you’ll notice that the following posts in this series have a similar aesthetic). Some images contain female figures, so I’m going to go back and explore which words have most likely drawn this out of the image generator.

On with the article!

As term one rapidly unfolds, the Artificial Intelligence boom kick-started in late 2022 by ChatGPT shows no sign of letting up. Since the start of term, we have seen the release of Microsoft’s new Bing Chat, and OpenAI has updated its terms and conditions to permit use by anyone over 13.

While AI can undoubtedly be a valuable tool in education, it’s important for educators to understand the ethical concerns that surround its use. We must ensure that we are using these technologies in ways that are responsible, just, and fair. The original Teaching AI Ethics post has proved hugely popular, but many educators from primary to tertiary have asked for more details on each of the nine areas. In this post, I’ll explore the first and most widely-known issue: bias and discrimination.

Here’s the original PDF infographic which covers all nine areas:

Algorithmic bias

One of the most pressing ethical concerns of AI is algorithmic bias. Algorithmic bias occurs when the data used to train AI systems reflects the biases and prejudices of society, resulting in discriminatory outputs.

ChatGPT is a prime example of an AI system that can suffer from algorithmic bias. It is a large language model trained on a massive dataset, including the Common Crawl, which contains over 12 years’ worth of web pages. While these datasets give the models tremendous capabilities, they are inherently biased. Indiscriminately scraping the internet for data means that the dataset can contain racist, sexist, ableist, and otherwise discriminatory language. As a result, ChatGPT can produce outputs that perpetuate these biases and prejudices.
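
To make that mechanism concrete, here is a minimal sketch in Python. The corpus and occupation words are entirely invented for illustration; they stand in for the web-scale text a real model is trained on. Counting which gendered pronouns co-occur with which jobs shows how skewed associations in scraped text become the statistics a model learns from:

```python
from collections import Counter

# A tiny, invented "scraped" corpus standing in for web-scale training data.
# In a dataset like the Common Crawl, similar patterns appear at massive scale.
corpus = [
    "the engineer said he would review the design",
    "the nurse said she would check the chart",
    "the engineer explained that he fixed the bug",
    "the nurse noted that she updated the record",
    "the engineer confirmed he approved the release",
]

def pronoun_counts(occupation, sentences):
    """Count which gendered pronouns co-occur with an occupation word."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if occupation in words:
            counts.update(w for w in words if w in {"he", "she"})
    return counts

for job in ("engineer", "nurse"):
    print(job, dict(pronoun_counts(job, corpus)))
# engineer {'he': 3}
# nurse {'she': 2}
```

Nothing about either job is inherently gendered, but a model trained on this text will still learn and reproduce the association.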

Moreover, AI models can reflect the biases and prejudices of society as a whole. Like society at large, the online community underrepresents marginalised groups and overrepresents others. For instance, the prevalence of racism and bigotry on sources like Reddit and Twitter can bleed into the datasets and be reproduced in the output of AI models.

Algorithmic bias can also arise from the training and reinforcement methods used to develop AI systems. For example, predictive policing systems used by law enforcement agencies in the US disproportionately target poor, Black, and Latinx communities, reinforcing existing systemic biases.

Different ways in which algorithmic bias can impact the creation and output of AI systems (World Economic Forum, 2021)

Discrimination by default

These biases and harmful outputs don’t just happen on occasion. It seems that Large Language Models like OpenAI’s GPT are almost unavoidably biased. The tremendous volume of discriminatory, gendered, racist, and ableist language in the dataset means that models have a tendency to discriminate by default.

There are some organisations and communities trying to counteract this seemingly inevitable tendency. BLOOM, for example, is a model trained by BigScience on a “crowdsourced” dataset which had ethical guardrails in place from its inception, including avoiding potentially biased sources. The dataset, called ROOTS, comprises 1.61 terabytes of text across 46 languages.

Unfortunately, although BLOOM may be less biased than GPT, the jury is out on whether the bias has been removed entirely. BLOOM is also significantly less powerful than a model like GPT or Google’s LaMDA, and so it is less likely that people will use it as the basis for their own software.

Case Study: Predictive Policing in the US

Predictive policing is the use of data analysis, machine learning, and artificial intelligence (AI) to predict where crimes are most likely to occur and who is most likely to commit them.

It is used by law enforcement agencies to allocate resources and personnel, identify potential criminal suspects, and prevent crime before it happens. However, there are concerns about the potential for bias and discrimination in predictive policing algorithms, as well as questions about the legality and ethics of using AI to predict criminal behaviour.

Critics also argue that predictive policing can reinforce existing biases and inequalities in the criminal justice system, leading to unjust and discriminatory outcomes. This is because the datasets often include biases which are a product of systemic racism, including police mugshot databases containing a disproportionate number of Black people and people of colour.
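
To illustrate the feedback loop critics describe, here is a small, entirely hypothetical simulation in Python. It is not any vendor’s actual algorithm; it simply shows how allocating patrols in proportion to historical arrest records can lock in an initial skew, because the system only “observes” crime where it already sends officers:

```python
import random

random.seed(0)

# Two hypothetical neighbourhoods with the SAME underlying crime rate,
# but neighbourhood A starts with more recorded arrests because it has
# historically been patrolled more heavily.
true_crime_rate = {"A": 0.10, "B": 0.10}
recorded_arrests = {"A": 50, "B": 10}   # historical, biased record
patrols_per_round = 100

for round_number in range(1, 6):
    total = sum(recorded_arrests.values())
    # "Predictive" allocation: send patrols in proportion to past arrests.
    patrols = {
        area: round(patrols_per_round * recorded_arrests[area] / total)
        for area in recorded_arrests
    }
    # More patrols in an area means more crime is observed and recorded,
    # even though the underlying crime rate is identical in both areas.
    for area, n_patrols in patrols.items():
        observed = sum(
            random.random() < true_crime_rate[area] for _ in range(n_patrols)
        )
        recorded_arrests[area] += observed
    print(f"Round {round_number}: patrols={patrols}, arrests={recorded_arrests}")

# The allocation never corrects itself: the historical skew towards A
# persists, because the system only sees crime where it already patrols.
```

Both neighbourhoods have identical underlying crime rates, but the historically over-policed area keeps generating more arrest records, which the allocation then treats as evidence that it deserves even more patrols.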

In August 2016, a coalition of 17 organisations, including the American Civil Liberties Union (ACLU), issued a statement expressing concern about predictive policing tools used by law enforcement in the United States. The statement highlighted the technology’s racial biases, lack of transparency, and other flaws that lead to injustice, particularly for people of colour.

The statement called for transparency about predictive policing systems, evaluation of their short- and long-term effects, monitoring of their racial impact, and the use of data-driven approaches to address police misconduct. The statement also emphasised the importance of community needs and the potential of social services interventions to address problems for at-risk individuals and communities before crimes occur.

Facial recognition technology poses special risks of disparate impact for historically marginalised communities, such as Black individuals, who are more likely to be stopped by police officers and are overrepresented in law enforcement databases. Recent studies demonstrate that facial recognition systems misidentify people with darker skin at higher rates, and that these technical inaccuracies are systemic rather than incidental.
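
One reason these disparities persist is that accuracy is often reported in aggregate. Here is a short sketch, using invented numbers, of how an impressive overall accuracy figure can hide a much higher error rate for a smaller, underrepresented group in the evaluation data:

```python
# Toy evaluation with made-up results: overall accuracy can hide
# large per-group disparities.
results = (
    [("lighter_skin", True)] * 95 + [("lighter_skin", False)] * 5 +
    [("darker_skin", True)] * 15 + [("darker_skin", False)] * 10
)

overall = sum(ok for _, ok in results) / len(results)
print(f"Overall accuracy: {overall:.0%}")  # 88%

for group in ("lighter_skin", "darker_skin"):
    subset = [ok for g, ok in results if g == group]
    print(f"{group}: {sum(subset) / len(subset):.0%} accurate on {len(subset)} faces")
# lighter_skin: 95% accurate on 100 faces
# darker_skin: 60% accurate on 25 faces
```

A single headline accuracy number tells us very little; per-group error reporting is what reveals who actually bears the cost of the mistakes.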

Companies have announced actions to improve the accuracy of their facial recognition algorithms and diversity of their training datasets, but the scope and effectiveness of such efforts vary across vendors.

There remains an ethical question of whether, and when, it is appropriate to use facial recognition to address legitimate security concerns, regardless of its accuracy. Guardrails are needed to ensure more equitable use of enhanced surveillance technologies, including facial recognition.

Check out the upcoming PD Teaching Writing in the Age of AI

Teaching AI Ethics

Each of these posts will expand on the original and offer a few suggestions of how and where AI ethics could be incorporated into your curriculum. Every suggestion comes with a resource or further reading, which may be an article, blog post, video, or academic paper.

The next post in this series will explore the environmental costs associated with Artificial Intelligence and what companies are doing – or not – to mitigate the huge impact of AI. Join the mailing list for updates:


Got a comment, question, or feedback? Get in touch:

