This is an updated post in the series exploring AI ethics, building on the original 2023 discussion of human labour. Since 2023, the human cost of AI has become one of the most pressing ethical issues in the industry, with major lawsuits, union formation, and investigative journalism bringing the exploitation of workers into public view. This post explores how the AI supply chain depends on underpaid and often traumatised workers, and what that means for education.
Content warning: this article contains discussions of abuse and violence, and may need to be adjusted for use with younger students. All Teaching AI Ethics articles are released under a CC BY NC SA 4.0 license and may be remixed, edited, and shared for educational purposes.
For the 2023 articles, which include still-relevant case studies and teaching ideas, check out the original series of posts:
When I wrote the original post on human labour in 2023, OpenAI’s use of Kenyan workers earning less than $2 per hour had only recently come to light. Since then, the situation has become both clearer and darker. We now have lawsuits working their way through courts in multiple countries, a global union of content moderators, investigative journalism exposing conditions across the AI supply chain, and a new book – Feeding the Machine by Mark Graham, James Muldoon, and Callum Cant – that lays bare the human exploitation at the heart of the AI industry.
The title of the 2024 book is appropriate: generative AI is a machine that must be fed, and what it feeds on is both human data and labour. Every chatbot you use, every image generator you prompt, every AI-powered recommendation you receive has been shaped by the hands and minds of workers who are, in most cases, invisible, underpaid, and often deeply harmed by the work they do.
In this updated article, I will explore what we now know about the AI labour supply chain, the workers who power it, and the growing movements for justice and accountability. As with the other articles in this series, I will finish with teaching examples that connect these issues to existing curricula.
The Hidden Workforce
Big Tech has sold us the illusion that artificial intelligence is a frictionless technology – machines training machines, algorithms improving themselves, intelligence emerging from… nowhere. But hidden beneath this smooth surface and the vague metaphors of “the cloud” lies a grim reality: a precarious global workforce of millions, labouring in often appalling conditions to make AI possible.
What Do AI Workers Do?
The workers powering AI fall into several categories, each performing tasks that machines cannot (yet) do for themselves.
The most widespread form of AI labour is data labelling and annotation. Before an AI can learn to recognise a stop sign, a cat, or a tumour, humans must label millions of images, identifying and tagging each object. This work includes categorising text, annotating conversations, classifying emotions, and identifying objects in video footage. It is tedious, repetitive work that requires concentration and accuracy, and it pays as little as one cent per task.
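To make this concrete, here is a minimal, entirely hypothetical sketch of what a single annotation task might look like as data. The field names, the bounding-box format, and the per-task rate are illustrative only; real platforms use their own schemas.

```python
# Hypothetical example of a single image-annotation task.
# Field names, bounding-box format, and the per-task rate are illustrative only.
annotation_task = {
    "task_id": "img-000451",
    "image_url": "https://example.com/frames/000451.jpg",
    "instruction": "Draw a box around every pedestrian and traffic sign.",
    "labels": [
        {"category": "pedestrian", "bbox": [134, 88, 62, 170]},  # x, y, width, height
        {"category": "stop_sign", "bbox": [402, 45, 48, 48]},
    ],
    "payment_usd": 0.01,  # some tasks reportedly pay as little as one cent
}

# At one cent per task, even 500 completed tasks in a day would earn just $5.00.
print(f"Earnings for 500 tasks: ${500 * annotation_task['payment_usd']:.2f}")
```

Multiply a record like this by the millions of labelled examples in a typical training set and the scale of the hidden labour becomes apparent.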

Content moderation represents some of the most psychologically demanding work in the AI supply chain. For AI systems to generate “safe” outputs, they must first learn what is unsafe. This means humans must review and label the most horrific content the internet has to offer: child sexual abuse material, beheadings, torture, bestiality, extreme violence, and hate speech. Workers describe reading hundreds of pieces of such content per day. The goal is to teach AI to filter this material out, but the human cost is severe.
Reinforcement Learning from Human Feedback, or RLHF, is the technique that made ChatGPT possible. Human workers evaluate AI outputs, judging which responses are helpful, harmful, safe, or nonsensical. They teach models how to “sound human” and stay within moral boundaries. This work requires cultural knowledge, language fluency, and judgement: skills that command high wages in wealthy countries, but are paid at a fraction of that rate when outsourced.
Finally, synthetic dialogue creation involves workers writing thousands of sample conversations that AI can learn from, making them the uncredited ghostwriters behind every chatbot. They create the examples that teach AI how humans actually talk.
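To illustrate the RLHF evaluation described above, here is a hypothetical sketch of the kind of preference judgement a human rater might produce. The structure and field names are invented for teaching purposes, not taken from any real platform.

```python
# Hypothetical sketch of a single RLHF preference judgement.
# The structure is invented for illustration; real pipelines differ.
preference_record = {
    "prompt": "Explain why the sky is blue to a ten-year-old.",
    "response_a": "Sunlight scatters off air molecules, and blue light scatters the most.",
    "response_b": "The sky is blue because it reflects the ocean.",
    "rater_choice": "response_a",  # judged more accurate and helpful
    "rater_flags": ["response_b_is_factually_wrong"],
    "rater_id": "anonymised-worker-117",
}

# Many thousands of judgements like this are aggregated into a reward signal
# that steers the model towards responses humans prefer.
```

Each judgement requires cultural knowledge and careful reading; the model only “sounds human” because thousands of humans did this work first.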
Where Are These Workers?
The AI labour supply chain follows familiar patterns of global exploitation. Companies like OpenAI, Meta, Google, and Microsoft contract with intermediary firms such as Sama, Scale AI, Majorel, and Teleperformance, which in turn hire workers in countries with lower wages and fewer labour protections.
Kenya has emerged as the most prominent hub for AI data work, with Nairobi hosting operations for Meta, OpenAI, TikTok, and numerous other companies. The Philippines, with its large English-speaking workforce, has long been a destination for outsourced digital labour and continues to play a significant role in the AI supply chain. India similarly serves as a major hub for data labelling and annotation work.
Venezuela presents a particularly troubling case: Scale AI specifically targeted Venezuelan workers during the country’s devastating economic crisis, capitalising on desperate circumstances to secure labour at rock-bottom prices. Meanwhile, Uganda, Ghana, Rwanda, and Nigeria represent emerging markets for AI data work as companies seek new sources of cheap labour. Pakistan has also hosted AI data operations, though workers there experienced the precarity of this industry firsthand when Remotasks abruptly withdrew from the country in 2024.
The wage disparity is staggering. According to research compiled by Privacy International, US-based annotators might make $10-$25 an hour, while Kenyan annotators make as little as $1-$2 an hour for the same work. Companies like Mercor, which hire workers from Canada, Europe and the United States, pay $16-$25 per hour for basic annotation, $40-$75 per hour for RLHF evaluation, and $60-$120 per hour for expert contractors. Meanwhile, workers in the Global South doing identical tasks earn a tiny fraction of these amounts.

Case Studies
OpenAI and Sama: The Workers Who Made ChatGPT “Safe”
The story that brought AI labour exploitation to global attention was TIME magazine’s January 2023 investigation into OpenAI’s use of Kenyan workers to make ChatGPT less toxic. I wrote about this in the 2023 article, but it is worth revisiting in 2026.
GPT-3, ChatGPT’s predecessor, had a significant problem: it was prone to producing violent, sexist, and racist output because it had been trained on hundreds of billions of words scraped from the internet. To address this, OpenAI needed to build a safety system: an AI that could detect and filter out toxic content. To train this safety system, humans needed to label tens of thousands of examples of the most harmful content imaginable.
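To show what “building a safety system” means in practice, here is a deliberately tiny, hypothetical sketch of how labelled examples might be used to train a text filter, using scikit-learn and placeholder strings. Real systems use vastly larger datasets and neural classifiers, but the principle is the same: every “harmful” label has to be applied by a person who read the original text.

```python
# A toy illustration of training a safety filter from human-labelled examples.
# The placeholder strings stand in for material that real workers had to read.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Have a wonderful day!",
    "Here is a recipe for vegetable soup.",
    "[graphic description of violence]",
    "[abusive and threatening message]",
]
labels = ["safe", "safe", "harmful", "harmful"]  # each label applied by a human annotator

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Thank you for your help!"]))  # likely ['safe']
```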
OpenAI contracted this work to Sama, a San Francisco-based company that employs workers in Kenya, Uganda, and India. Sama markets itself as an “ethical AI” company and claims to have helped lift thousands of people out of poverty. But the reality for workers was different.
Beginning in November 2021, Sama workers in Nairobi were sent snippets of text describing child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. They were paid between $1.32 and $2 per hour to read this material, label it, and feed it back to OpenAI.
As one worker told TIME: “That was torture. You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.”
Richard Mathenge, who led one of the teams training OpenAI’s models, later described the experience to Slate: “I can tell when my team is not doing well, I can tell when they’re not interested in reporting to work. My team was just sending signals that they’re not ready to engage with such wordings.”
Mophat Okinyi, a quality-assurance analyst on Mathenge’s team, reported developing insomnia, anxiety, depression, and panic attacks from the work. He told journalists that the repeated exposure to explicit text had lasting effects on his mental health.
Critically, OpenAI told reporters it believed it was paying Sama contractors $12.50 per hour. Workers say they actually received approximately $1-$2 per hour, sometimes less. This enormous discrepancy raises serious questions about where the money went.
Sama cancelled its contract with OpenAI in February 2022, eight months earlier than planned, partly due to the traumatic nature of the work and partly because of media attention. The company also announced it was exiting the content moderation business entirely. But for the workers who did the job, the damage was done.

Scale AI and Remotasks: The Billionaire’s Sweatshop
If OpenAI’s use of Sama was the story that broke open the AI labour issue, Scale AI represents the industrialisation of exploitation.
Scale AI, another San Francisco-based data labelling company founded by Alexandr Wang, has become a behemoth in the AI supply chain. The company was valued at $13.8 billion in 2024 and has partnered with OpenAI, Meta, Google, Microsoft, and the US Department of Defense. Wang became the world’s youngest self-made billionaire.
Scale AI operates through subsidiaries including Remotasks (for computer vision and autonomous vehicle data) and Outlier (for LLM data annotation). These subsidiaries operate as platforms connecting “taskers” with labelling jobs, often without disclosing that the work is actually for Scale AI or its major corporate clients.
As the Oxford Internet Institute’s Fairwork project found, Remotasks scored just 1 out of 10 on fair labour practices, failing on basic measures including whether workers are paid in full.
The problems with Scale AI’s labour practices have been documented extensively. MIT Technology Review reported that workers’ hours were undercounted, lowering their weekly earnings, and that they risked suspension if they were not fast or precise enough. As one Venezuelan worker told investigators: “I realized that their approach was to drain each user as much as possible.”
In March 2024, Remotasks abruptly shut down operations in Kenya, Nigeria, and Pakistan with minimal notice. Workers received a cold email stating: “We are reaching out with an important announcement regarding Remotasks operations in your location. We are discontinuing operations in your current location effective March 8, 2024.” Thousands of workers who depended on the platform for their livelihoods were left stranded, with no job security and, in many cases, without wages they were still owed.
Workers in multiple countries reported not receiving their final payments when contracts ended or when they were suspended from the platform. One worker showed MIT Technology Review screenshots of an eight-month-long payment dispute that customer service ultimately marked as “resolved” without her ever receiving her money.
In January 2025, Scale AI was sued in US federal court by contractors alleging the company violated worker safety laws by exposing them to emotionally distressing content while training AI tools for Meta and Google. The lawsuit alleges that workers developed PTSD, depression, anxiety, and other mental health problems, and that the mental health counselling they were promised never materialised.
In June 2025, Meta agreed to purchase a 49% stake in Scale AI for $14.8 billion, raising further questions about accountability for these labour practices.
Meta, TikTok, and Content Moderation
While much attention has focused on the workers who train AI systems, an equally important group are the content moderators who keep social media platforms from becoming unusable cesspools of violence and abuse (or at least, that’s the theory).
Content moderators for Meta (Facebook, Instagram), TikTok, and other platforms spend their days reviewing flagged content and deciding whether it violates platform policies. This work protects billions of users from the worst of the internet. It is also deeply traumatising.
Meta has outsourced this work in Africa primarily through Sama (until 2023) and then Majorel (now part of Teleperformance). TikTok uses Majorel and other contractors. The workers are employed by these intermediary companies, not by Meta or TikTok directly, which allows the tech giants to maintain distance from the conditions in which the work is performed.
Daniel Motaung, a South African who worked as a Facebook content moderator for Sama in Nairobi, became a key whistleblower. He described being exposed to graphic violence from his first day on the job: the first video he moderated was a beheading. He developed severe PTSD. When he tried to organise his colleagues into a union to fight for better conditions, he was fired.
In May 2022, Motaung sued Meta and Sama in Kenyan courts, alleging exploitation, union busting, and wage theft. Meta argued that Kenyan courts had no jurisdiction over an American company. In February 2023, a Kenyan judge ruled that Meta could indeed be sued in Kenya. Meta appealed; in September 2024, the Kenyan Court of Appeal ruled against Meta, allowing the case to proceed to trial.
A 2024 report found that more than 140 of the Kenyan content moderators involved in the lawsuit had been diagnosed with severe PTSD, with some also developing general anxiety disorder and major depressive disorder. In 2025, content moderators in Ghana also sued Meta and Teleperformance over similar allegations of psychological harm.
The Human Cost
Workers report a devastating range of mental health impacts. Many develop PTSD, experiencing recurring nightmares, flashbacks, and intrusive thoughts related to the graphic content they’ve reviewed. Depression and anxiety are widespread, with persistent feelings of hopelessness and worry that interfere with daily functioning. Panic attacks are common, as is insomnia caused by disturbing images and thoughts that make sleep impossible.
A 2025 survey by Equidem of 76 workers from Colombia, Ghana, and Kenya reported 60 independent incidents of psychological harm, including anxiety, depression, irritability, panic attacks, PTSD, and substance dependence. One former content moderator reported reading up to 700 sexually explicit and violent pieces of text per day, with the psychological toll causing him to lose his family.
Workers are often bound by strict Non-Disclosure Agreements (NDAs), legally prevented from speaking about what they see or how it affects them. This enforced silence compounds the trauma and prevents workers from seeking appropriate support.
The Gig Economy and AI Labour
The AI labour supply chain has industrialised exploitation through the gig economy model. Companies like Scale AI don’t employ workers directly; they operate platforms that connect “taskers” with tasks. This structure has profound consequences for workers.
The model obscures responsibility at every level. When workers are harmed, it is unclear who is responsible: the AI company commissioning the work, the platform facilitating it, or the local subcontractor nominally employing the workers. This diffusion of responsibility helps all parties avoid accountability while the harm continues unaddressed.
The platform structure also prevents organising. Workers are isolated, working from home or scattered across different locations. They often don’t know who else is doing the same work or even which company their labour ultimately serves, making collective action extraordinarily difficult. Meanwhile, the model enables geographic arbitrage: companies can shift work to wherever labour is cheapest and protections weakest. When workers in Kenya began organising, Remotasks simply left the country.
Perhaps most damaging is the lack of job security this model creates. Work is unpredictable: available one week, gone the next. Workers cannot plan their lives or finances around income that might disappear without warning. As researchers at the Brookings Institution noted, “platforms often do not provide clear dispute mechanisms that workers can use to elevate their concerns. Workers also frequently do not know which systems their work will train or build.”
What Has Changed Since 2023?
When I wrote the original post on human labour in 2023, this was still an emerging story. The TIME investigation had just been published. The lawsuits hadn’t been filed. The union hadn’t been formed.
In the years since, several things have changed. The human cost of AI is now part of public discourse. Books like Feeding the Machine have reached general audiences, and major news outlets regularly cover AI labour exploitation. What was once a niche concern has become a recognised ethical issue.
Worker organising has transformed the landscape. The African Content Moderators Union and the Global Trade Union Alliance represent unprecedented collective action by AI workers. Workers who were once isolated and voiceless are now connected across borders and demanding change together.
Legal precedents are being established. Kenyan courts have ruled that Big Tech companies can be held accountable in the countries where they outsource work, a decision with implications for jurisdictions around the world. Regulatory attention is also increasing: the EU AI Act includes provisions requiring transparency about data workers in AI supply chains, while the EU Platform Work Directive (2024) aims to improve working conditions on gig platforms. Meanwhile, the Fairwork project at Oxford University now regularly audits AI labour platforms, creating accountability through public ratings.
Yet despite increased attention, the fundamental business model hasn’t changed. Companies continue to outsource AI labour to the cheapest markets. Workers continue to be paid poverty wages for traumatic work. When pressure increases in one location, companies simply move operations elsewhere.
The situation remains urgent. As AI systems become more sophisticated, they require more human labour to train and maintain, not less. The demand for data labelling, content moderation, and RLHF work is growing. Without structural change, exploitation will only intensify.

Fighting Back: Unions and Legal Action
On May 1, 2023 – International Workers’ Day – more than 150 content moderators gathered in Nairobi and voted to establish the first African Content Moderators Union. Workers from Meta, TikTok, OpenAI, and other platforms joined together in an act of historic defiance against Big Tech.
Daniel Motaung, the whistleblower who had sparked the movement, said: “I never thought, when I started the Alliance in 2019, we would be here today – with moderators from every major social media giant forming the first African moderators union. There have never been more of us. Our cause is right, our way is just, and we shall prevail.”
Richard Mathenge, the former ChatGPT moderator who led RLHF teams for OpenAI, was named among TIME’s 100 most influential people in AI. He described the union’s formation as “an historic step” for workers “powering the AI revolution.”
James Oyange, a former TikTok moderator, emphasised that the problems are systemic: “People should know that it isn’t just Meta. At every social media firm there are workers who have been brutalised and exploited.”
The movement has now gone global. In April 2025, content moderators launched the first-ever Global Trade Union Alliance of Content Moderators in Nairobi, bringing together workers from nine countries to fight for living wages, safe working conditions, and union representation.
“Companies like Facebook and TikTok can’t keep hiding behind outsourcing to duck responsibility for the harm they help create”, said Christy Hoffman, General Secretary of UNI Global Union. “This work can—and must—be safer and sustainable. That means living wages, long-term employment contracts, humane production standards and a real voice for workers.”
The alliance is demanding direct employment by tech companies rather than outsourced contracts; mental health support and safe working conditions; higher wages that reflect the skill and importance of the work; and the right to organise without retaliation.
Teaching Human Labour in AI
In the original 2023 collection, each article ended with a selection of ideas for teaching the issue in the context of existing curriculum areas. These 2026 updates similarly align concepts from the articles with standards from typical curricula around the world, in particular the Australian, UK, US, and IB curricula. For readers teaching in higher education, these examples will also be suitable across a wide range of disciplines.
My key point remains: we do not need specialised “AI literacy” classes to deliver quality instruction on AI ethics: we already have the expertise we need in schools and universities.
English
English curricula require students to analyse texts critically, examine voice and perspective, and understand the relationship between language and power. This provides an excellent foundation for exploring questions like “Whose voices are missing from the AI narrative?”, “How does the language of ‘automation’ and ‘intelligence’ obscure human labour?”, or “What persuasive techniques do tech companies use to deflect responsibility?” Students might analyse corporate press releases, compare them with investigative journalism, or write from the perspective of an AI worker.
Geography
Geography examines globalisation, development, inequality, and the relationship between wealthy and developing nations. AI labour provides a powerful contemporary case study for questions like “How does AI reproduce colonial patterns of extraction?”, “What are the push and pull factors that make certain countries attractive for AI labour outsourcing?”, or “How does geographic arbitrage enable exploitation?” Students could map the AI supply chain or compare working conditions across different countries.
Legal Studies
Legal Studies curricula examine rights, responsibilities, jurisdiction, and the relationship between law and justice. The Meta/Kenya lawsuits provide excellent case material for questions like “Can multinational corporations be held accountable in the countries where they cause harm?”, “How do labour laws apply to gig workers?”, or “What legal protections should exist for AI workers?” Students could examine the jurisdictional arguments in the Motaung case or compare labour protections across different legal systems.
Business Studies / Economics
These subjects examine business models, supply chains, labour markets, and corporate responsibility. AI labour raises questions like “Is the gig economy model sustainable?”, “How do tech companies externalise costs?”, or “What is the true cost of AI?” Students could analyse Scale AI’s business model, examine the economics of data labelling, or debate corporate social responsibility in the AI industry.
Psychology / Health
Psychology and Health curricula examine mental health, trauma, and wellbeing. Content moderation provides stark examples for questions like “What are the psychological effects of repeated exposure to traumatic content?”, “How should employers protect worker mental health?”, or “What support systems should be available for workers in harmful occupations?” Students could research PTSD and secondary trauma, or examine the adequacy of mental health provisions for content moderators.
Digital Technologies / Computer Science
Digital Technologies curricula examine how systems are designed and their impacts on users and society. This provides natural connections to questions like “What human labour is hidden behind AI systems?”, “How could AI be developed more ethically?”, or “What trade-offs exist between AI capability and worker welfare?” Students could audit AI systems they use to identify where human labour might be involved, or design proposals for more ethical AI development practices.
Theory of Knowledge (IB)
TOK examines how we know what we know and the relationship between knowledge and ethics. AI labour raises profound epistemological and ethical questions: “Can knowledge production be separated from its conditions of production?”, “What ethical obligations do knowledge-users have to knowledge-producers?”, or “How does the invisibility of AI labour shape our understanding of AI?” These questions are ideal for TOK essays and presentations.
History
History curricula examine change over time, causation, and the impacts of technological change. AI labour can be connected to broader patterns of labour exploitation through questions like “How does AI labour compare to other historical forms of outsourced and exploited work?”, “What can historical labour movements teach us about current struggles?”, or “How have previous technological revolutions affected workers?” Students could draw parallels to textile industry exploitation, the Fairtrade movement, or historical union organising.
Civics and Citizenship
Civics curricula examine rights, responsibilities, democratic participation, and global citizenship. AI labour raises important civic questions: “What responsibilities do consumers have for the conditions in which products are made?”, “How should democratic societies regulate AI labour?”, or “What role can citizens play in demanding ethical AI?” Students could examine proposed regulations, write to elected representatives, or debate policy interventions.
Note on the images: In the 2023 version of Teaching AI Ethics I generated images in Midjourney. This time around, I have sourced images from Better Images of AI. I still use AI image generators, but due to the environmental concerns and the contentious copyright issues discussed in these articles, I am more conscious of my use. Better Images of AI includes an excellent range of photos, illustrations, and digital artworks which have been generously licensed for commercial and non-commercial use.
Cover image for this article: Max Gruber / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
All articles in this series are released under a CC BY NC SA 4.0 license.
Subscribe to the mailing list for updates:
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
