Teaching AI Ethics: Human Labour

This is the eighth post in a series exploring the nine areas of AI ethics outlined in this original post. Each post goes into detail on the ethical concern and provides practical ways to discuss these issues in a variety of subject areas. For the previous post on affect recognition, click here.

When people think of Artificial Intelligence, the image that often springs to mind is that of sentient machines or shiny metallic robots, a depiction heavily influenced by popular culture. This narrative, along with language around “magical” or “mythical” AI, tends to overshadow actual pressing ethical issues associated with AI development and usage. This post will explore the exploitation of human labour in AI development, including low paid workers used for categorising and labelling data, and the impact of the AI infrastructure on human workers.

In the ongoing arms race towards creating autonomous AI systems, multinational technology corporations are relying on a lot of ‘ghost work.’ This term, coined by anthropologist Mary L. Gray and computational social scientist Siddharth Suri, refers to labour carried out by a “global underclass” of precarious workers. Occupying roles such as content moderators, data labellers, and delivery drivers, these workers often come from economically disadvantaged backgrounds and perform critical tasks for the tech industry at low wages and under suboptimal working conditions.

Current AI systems lean heavily on methodologies such as statistical machine learning and deep learning with artificial neural networks. These methods require vast quantities of data, much of it labelled by hand. To obtain this data economically, platforms like Amazon’s Mechanical Turk have emerged, enabling ‘crowd work’: breaking large tasks down into smaller units that can be distributed across numerous workers.
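To make the mechanics of crowd work concrete, here is a minimal sketch of how a large labelling job might be broken into micro-tasks. Everything below (the `MicroTask` structure, batch sizes, per-task payments) is an illustrative assumption, not any real platform’s API.

```python
# A minimal sketch of splitting a large labelling job into micro-tasks
# for crowd workers. Illustrative only; not a real crowd-work platform's API.
from dataclasses import dataclass


@dataclass
class MicroTask:
    task_id: str
    texts: list[str]      # a handful of snippets per worker assignment
    instructions: str     # e.g. "Label each snippet as toxic / not toxic"
    reward_usd: float     # payment per completed micro-task


def split_into_microtasks(snippets: list[str], batch_size: int = 10,
                          reward_usd: float = 0.05) -> list[MicroTask]:
    """Break a large corpus into small, independently assignable units."""
    tasks = []
    for i in range(0, len(snippets), batch_size):
        tasks.append(MicroTask(
            task_id=f"task-{i // batch_size:06d}",
            texts=snippets[i:i + batch_size],
            instructions="Label each snippet as toxic / not toxic",
            reward_usd=reward_usd,
        ))
    return tasks


# Example: 100,000 snippets become 10,000 micro-tasks, each paying cents.
corpus = [f"snippet {n}" for n in range(100_000)]
print(len(split_into_microtasks(corpus)))  # 10000
```

The point of the sketch is the economics: the work is atomised into thousands of tiny, low-paid units, which is precisely what makes it cheap for the platforms and precarious for the workers.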

The emergence of such platforms and data-labelling companies, however, has resulted in workers being treated like parts in a machine, rather than individuals with rights and needs. These workers are often subjected to constant surveillance and repetitive tasks and face punitive measures for any deviation from assigned tasks. The mental and physical toll can be considerable, especially for content moderators who are continuously exposed to traumatic content without adequate support systems in place.

This situation shines a light on a key issue in AI ethics: the exploitation of labour in the AI industry. It’s a stark reminder that the journey towards creating autonomous AI systems is not as ‘autonomous’ as it appears. It’s built on the labour of often exploited workers who, ironically, contribute to the development of AI systems that might eventually replace them.

Transnational worker organising efforts, research collaborations with workers, and public accessibility of research findings are some avenues that have been explored to address these challenges. An essential aspect of this conversation is the role of solidarity between high-income tech workers and their lower-income counterparts. There’s potential here for those with more influence within corporations to advocate for their colleagues who have less.

Here’s the original Teaching AI Ethics PDF infographic which covers all nine areas. Feel free to download and distribute:

Case Study: OpenAI’s Data Labelling

OpenAI, the company responsible for the enormously popular Large Language Model chatbot ChatGPT, has made great strides across many forms of Artificial Intelligence in the past 12 months. However, some of these achievements have raised significant ethical concerns regarding the exploitation of human labour and the handling of harmful content. This case study explores the findings of an excellent piece of investigative journalism published earlier this year in Time magazine.

Read the article: OpenAI Used Kenyan Workers on Less Than $2 Per Hour

GPT-3 was designed to demonstrate exceptional linguistic abilities, stringing together sentences in a strikingly human-like manner. It was trained on hundreds of billions of words scraped from the internet, a vast corpus of human language that I’ve written about in other posts. This method endowed GPT-3 with impressive language-processing skills but also became its largest setback, as it incorporated the internet’s toxicity and bias into its output.

To tackle these challenges, OpenAI aimed to construct an AI-powered safety mechanism, akin to the systems deployed by social media companies like Facebook to detect and remove hate speech and other forms of toxic language. The premise was straightforward: feed an AI with labelled examples of violence, hate speech, and abuse, and this tool could learn to identify and eliminate these forms of toxicity.
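As a rough illustration of that premise, the sketch below trains a generic text classifier on a handful of human-labelled examples and uses it to flag new text. This is a toy example using scikit-learn, assumed purely for demonstration; it is not OpenAI’s actual safety system, which relied on far larger labelled datasets produced by the workers described below.

```python
# A minimal sketch of a safety classifier: learn from human-labelled
# examples of toxic and non-toxic text, then score new text.
# Generic illustration only; not OpenAI's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples produced by human data labellers.
texts = [
    "You are wonderful, thanks for your help",   # label 0: acceptable
    "I will hurt you if you post that again",    # label 1: toxic
    "Have a great day everyone",                 # label 0: acceptable
    "Those people deserve to suffer",            # label 1: toxic
]
labels = [0, 1, 0, 1]

# Convert text to features and fit a simple classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The trained filter can then score new text before it reaches users.
print(classifier.predict(["Thanks, that was really useful"]))  # likely [0]
```

The crucial detail is the labels: every training example had to be read and categorised by a person, and for a filter targeting violence, hate speech, and abuse, that means people reading that material at scale.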

In November 2021, OpenAI began the process of creating this safety system. They sent tens of thousands of snippets of text to an outsourcing firm in Kenya, Sama. The text was pulled from various internet sources, including extremely harmful content describing graphic situations of abuse, murder, and self-harm. Sama, a San Francisco-based company, employs workers in Kenya, Uganda, and India to label data for Silicon Valley clients like Google, Meta, and Microsoft. While it brands itself as an “ethical AI” company and boasts of lifting over 50,000 people out of poverty, there are concerning elements surrounding its operations.

Sama’s data labellers, who were contracted to work on behalf of OpenAI, earned a take-home wage of approximately $1.32 to $2 per hour depending on seniority and performance. This rate was for work that involved labouring over harmful, potentially traumatising content. To learn about the full extent of the trauma experienced by these workers, read the original article in Time magazine.

The case of OpenAI’s development of GPT-3 and its associated safety mechanism serves as an instructive example of the ethical challenges that permeate the AI industry. As technology companies continue to pursue advancements in AI, it is critical to scrutinise the labour practices that underlie these developments and to ensure that the quest for “ethical AI” does not overlook the wellbeing and fair treatment of the human workforce powering it.

Anatomy of an AI System – Kate Crawford and Vladan Joler (2018) https://anatomyof.ai/

Anatomy of an AI System

“Anatomy of an AI System” is a large-scale map and long-form essay produced by Kate Crawford, a senior principal researcher at Microsoft Research, and Vladan Joler, Professor at the Academy of Arts at the University of Novi Sad in Serbia. The project, initially published in 2018, aims to illustrate the complex network of resources, labour, and data required to create and operate a single AI system, in this case, the Amazon Echo.

The essay and map unravel the life cycle of an AI system, from resource extraction to its eventual disposal. They highlight the environmental, labour, and data implications involved in the making of these technologies. This includes the extensive infrastructure needed to train AI, the material and human costs, and the vast amounts of data collected during its operation.

A few key themes explored in the “Anatomy of an AI System” include:

  1. Material resources: The project outlines the extraction of Earth’s minerals used in the manufacturing of devices, as well as the environmental implications of these processes.
  2. Labour: It highlights the often invisible human labour, such as precarious ‘ghost work,’ involved in the creation, maintenance, and disposal of AI systems.
  3. Data: The study also underscores the extensive amount of personal data that AI devices collect from users, which further trains and enhances the AI.

The main conclusion of Crawford and Joler’s work is that AI systems are neither intangible nor magical: they are deeply rooted in the Earth’s geology and underwritten by human labour and ingenuity. Through their project, they argue that we must critically examine the full life cycle of AI systems to fully understand their social and ethical implications.

Teaching AI Ethics

Each of these ‘advanced level’ AI Ethics posts comes with three lesson activities that can be used to introduce students to these complex issues. In this post, I’m using three Project Zero thinking routines to explore the problem of human labour.

  1. Circle of Viewpoints Thinking Routine
    In this activity, students assume different roles or perspectives in the context of AI ethics and labour exploitation. For example, they might adopt the viewpoint of a tech CEO, a data labeller, an AI ethicist, a delivery driver, a consumer of AI products, and a policy maker. Each student, from their character’s viewpoint, discusses their perspective on the issue, its impacts, and possible solutions. This can help students understand the complexity of the issue and appreciate the diverse range of stakeholders involved. Afterward, a group discussion can be held where students share insights they gained from their assumed roles.
  2. Zoom In Routine for Unseen Labour
    For this activity, present students with images that indirectly represent the impact of AI on human labour, such as a Google search results page, an Amazon product page, or a snapshot of an AI chatbot interaction. Do not initially disclose the connection between these images and the concept of unseen labour in AI.
    Ask students what they see and note down their observations. Then, provide a little more context – tell them these products/services are powered by AI. Ask them again what they see and how their understanding has changed.
    Finally, introduce the concept of unseen labour in AI and explain how each image involves the contribution of countless unseen workers. Discuss how these workers’ efforts and challenges are typically obscured in the final product.
  3. Compass Points Routine on Proposed AI Regulations
    For this activity, propose a hypothetical regulation or policy that aims to improve the working conditions of people in the AI industry, such as mandatory mental health provisions for content moderators or minimum wage stipulations for data labellers. Use the Compass Points routine to explore this proposal from different angles:
    • E (Excited): What would be the positive outcomes of this policy? Who would benefit and how?
    • W (Worried): What concerns might arise from implementing this policy? Could there be unintended negative consequences?
    • N (Need to Know): What additional information do we need to fully understand the potential impact of this policy?
    • S (Stance): After discussing the above points, what are our individual viewpoints towards this policy?
    After students have discussed each point, hold a class discussion about the complexities of creating fair working conditions in the AI industry and how they might be navigated.

The final post in this series will explore power, and how all of the ethical issues discussed in this series contribute to an AI hegemony. Join the mailing list for updates:


I’m working with schools and universities across Australia and internationally to develop AI policies and processes around academic integrity. If you’d like to discuss advisory work or professional learning, get in touch via the form below:
