Teaching AI Ethics 2025: Bias

Synopsis
This first instalment in the Teaching AI Ethics 2025 series revisits the theme of bias in generative AI. It explains how data bias, model bias and human bias interact to produce skewed or discriminatory outputs in large language model and image generation systems, illustrates those problems with up-to-date research and examples, critiques the limitations of current “guard-rail” fixes, and closes with practical ways teachers can embed critical discussions of AI bias across English, Mathematics, Civics, Visual Arts and other subjects.

Originally published at: https://leonfurze.com/2025/05/05/teaching-ai-ethics-2025-bias/

Links
https://leonfurze.com/ai-ethics/
https://leonfurze.com/2025/04/23/teaching-ai-ethics-2025-introduction/
https://dl.acm.org/doi/10.1145/3442188.3445922
https://academic.oup.com/pnasnexus/article/3/9/pgae346/7756548
https://en.wikipedia.org/wiki/List_of_cognitive_biases
https://www.deeplearning.ai/the-batch/imagenet-gets-a-makeover/
http://midjourney.com
https://help.openai.com/en/articles/8313359-is-chatgpt-biased
https://docs.anthropic.com/en/release-notes/system-prompts
https://github.com/jujumilk3/leaked-system-prompts
https://github.com/jujumilk3/leaked-system-prompts/blob/main/google-gemini-1.5_20240411.md
https://www.technologyreview.com/2023/02/24/1069093/ai-image-generator-midjourney-blocks-porn-by-banning-words-about-the-human-reproductive-system
https://msmagazine.com/2024/08/01/chatgpt-thinks-doctors-are-male-sexism-women-artificial-intelligence/
https://www.sciencedirect.com/science/article/pii/S258975002300225X
https://doi.org/10.1007/s43681-024-00531-5
https://arxiv.org/abs/2403.02726

2 responses to “Teaching AI Ethics 2025: Bias”

  1. […] the infringement of copyright and intellectual property rights in the creation of these models, the bias, the privacy concerns, the surveillance and problematic data collection practices of these […]

  2. […] questions about algorithmic bias, self-fulfilling prophecies, and student agency. For example, machine learning models are less accurate at predicting success for racial and cultural-linguistic m…, meaning these systems may systematically disadvantage certain groups. When an algorithm flags a […]
