Teaching AI Ethics: Power

This is the final post in a nine-part series exploring AI ethics, originally outlined in this post. Each post goes into detail on the ethical concern and provides practical ways to discuss these issues in a variety of subject areas. Here’s a full list of the rest of the series:

  1. Bias and discrimination
  2. Environmental concerns
  3. Truth and academic integrity
  4. Copyright
  5. Privacy
  6. Datafication
  7. Emotion recognition
  8. Human labour

This post on Power has been a long time coming. It is, in my opinion, the most complex of the ethical issues I’ve discussed in this series.

Unlike the previous posts, which dealt with a single ethical issue each, the issue of power is really a combination of factors. In this post, I’ll explore how the ethical concerns I’ve discussed throughout this series coalesce to reinforce and perpetuate societal power structures, and how AI might contribute to an uneven distribution of wealth, freedom, and power.

Here’s the original PDF infographic which covers all nine areas:

Understanding hegemony

Hegemony, a term popularised by the Italian Marxist philosopher Antonio Gramsci, refers to the dominance of one group over others in society. It often manifests through the perpetuation of cultural norms, beliefs, and values that serve the interests of the dominant group. It establishes a status quo that seems ‘natural’ or ‘inevitable,’ but in reality, it’s an intricately designed system that advantages some while disadvantaging others.

Hegemonic structures are interwoven into the fabric of society, influencing all aspects of life, including politics, economy, and social norms. They are perpetuated by a subtle and often unrecognised form of coercion that leads people to accept, adopt, and even perpetuate dominant ideologies, even when these may work against their best interests.

When we look at AI through the lens of hegemony, we can start to see how these powerful technologies can be deployed to maintain and reinforce hegemonic structures. From perpetuating bias, exacerbating environmental inequities, and manipulating ‘truth,’ to encroaching on privacy, commodifying data, and influencing human labour markets — AI can, and often does, contribute to these systemic disparities.


Connecting the dots: bias, environment, human labour and datafication

The issue of power and control in AI runs deep. In the first post in this series, I spoke about bias and discrimination. Because of the way AI models are constructed, they are often biased towards a particular “worldview” and can disenfranchise already marginalised communities. Take, for example, the structure of a Large Language Model like GPT. Its huge training dataset contains billions of pages scraped from the web, but the vast majority of the text is in the English language. That content is further biased by the way the data is “crawled” and absorbed into the models. In the words of Emily Bender, Timnit Gebru, and the other authors of the now-famous “Stochastic Parrots” article:

In all cases, the voices of people most likely to hew to a hegemonic viewpoint are also more likely to be retained. In the case of US and UK English, this means that white supremacist and misogynistic, ageist, etc. views are overrepresented in the training data, not only exceeding their prevalence in the general population but also setting up models trained on these datasets to further amplify biases and harms.

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

Other research has demonstrated that the biases in Artificial Intelligence can particularly discriminate against young, non-white males; that predictive policing algorithms and AI used in the courts can unfairly target black people; and that even attempting to filter or remove bias can inadvertently compound the issue. Companies like OpenAI have been found to use low-paid human labour in countries like Kenya to manually classify and filter toxic and discriminatory data, in yet another example of a powerful, Western company profiting from the labour of poorer communities.

What all this means is that powerful AI, across a range of applications from language models to facial recognition to the systems we use to collect data in education, can not only reflect but actively reinforce harmful stereotypes and biases.
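To make the point about skewed training data a little more concrete, here is a minimal sketch (not from the original post) of how you might estimate the language mix of a small sample of scraped web text. The langdetect library and the sample snippets are assumptions for illustration only; at web scale, the same kind of analysis reveals the heavy skew towards English that Bender, Gebru, and their colleagues describe.

```python
# A rough, hypothetical illustration: estimating the language mix of a small
# sample of "scraped" text. The langdetect library and the sample snippets
# are assumptions for demonstration, not taken from the original post.
from collections import Counter

from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make language detection deterministic

# Stand-ins for a handful of scraped web pages
sample_pages = [
    "The quick brown fox jumps over the lazy dog.",
    "Artificial intelligence is transforming education.",
    "Machine learning models are trained on text scraped from the web.",
    "La inteligencia artificial está en todas partes.",
    "Les modèles de langage apprennent à partir du web.",
]

counts = Counter(detect(page) for page in sample_pages)
total = sum(counts.values())
for lang, n in counts.most_common():
    print(f"{lang}: {n}/{total} pages ({n / total:.0%})")
```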

Even the infrastructure of these systems entrenches existing societal inequalities. When I wrote about the environmental impact of Artificial Intelligence, I focused on the carbon footprint of training models and the extractive mining processes needed to produce and power the hardware AI is built from. But, as Bender, Gebru, and their colleagues also pointed out, the environmental impact of AI particularly affects countries already suffering the effects of the climate crisis:

These models are being developed at a time when unprecedented environmental changes are being witnessed around the world… It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources — both of which disproportionately affect people who are already in marginalized positions.

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

Again, I want to stress the interconnectedness of these problems.

AI systems built on large datasets – whether of text, images, population data, or other sources – can entrench systemic biases. Those systems are built from technologies which contribute to global environmental issues that disproportionately affect poorer countries and already marginalised communities.

And to bring it back to our field – education – the manner in which all of that data is collected and processed, or the datafication of students, compounds these issues further. In a recent blog post, Radhika Gorur, Joyeeta Dey, Moira V. Faul and Nelli Piattoeva comment on the dilemma of “decolonising” data in education. Though the article is about EdTech, the discussion applies equally to AI, the engine which will drive many of the education technologies already present in classrooms across the world.

The authors argue that we urgently need to scrutinise the philosophies and principles underpinning these education technologies and consider how to promote the ethical use of data, especially in the global south. The collection of data on students, and by extension the use of AI in applications offering “personalised learning”, overlooks the diverse cultural, spiritual, and epistemological realities of different communities.

The article raises critical questions about whether international comparative assessments are suitable for all nations, and why we should challenge the ubiquity and apparent inevitability of EdTech.

What about pauses and open letters?

If you’ve been keeping track of the media coverage around Artificial Intelligence, you will have no doubt seen the open letter calling for a pause to AI development, and the subsequent outcry from various fathers, godfathers, grandfathers, brothers, and uncles of AI like Geoffrey Hinton and Yoshua Bengio. Even OpenAI’s CEO Sam Altman practically begged the US Senate to regulate his industry, alongside “AI critic” (and founder of several very successful AI companies) Gary Marcus. Marcus and Altman have even offered to help lead those efforts.

It would be easy to look at these examples of alarm from respected industry experts and wonder if we haven’t just ushered in the end of the world. On the other hand, if you dig around a little, it starts to seem like the apocalyptic hype serves some of these individuals and companies pretty well.

First of all, the “pause” was broadly condemned as both unsustainable and unrealistic, and as a self-interested attempt by some to slow down the pace of development so their own companies could catch up. It was shot down as a publicity stunt, and largely ignored in the industry. Geoffrey Hinton’s claims have also been called into question. His former Google colleagues, most notably Timnit Gebru, co-author of the Stochastic Parrots article mentioned earlier, have criticised his failure to support them when they were fired from (or left willingly, depending on who you ask) Google’s ethics team.

Both the pause and the prophecies of doom from experts such as Hinton have been labelled a distraction from the real issues of AI which could be dealt with right now. Foremost amongst these issues are the distribution of power and the marginalisation of at-risk groups.

For our students, there are many reasons why we should be concerned about AI, and most of those reasons are much more down-to-earth than “because it will destroy the world”. As power and wealth continue to be centralised in the hands of leading companies, we need to question the impact AI will have on the workforce and the future lives of our students. Although AI could bring productivity gains, it remains to be seen whether those benefits will be passed down to workers or used to make the rich richer.

Much like social media and “influencer” work – often attractive to young people who use the platforms – Artificial Intelligence is also built on a lot of “free labour”. Just as Facebook and Twitter profit from users contributing hours of unpaid time and their creative and intellectual property, AI is built on data that was never paid for, and trained for free by every person using the platforms.

Rather than subscribing to the end of the world narrative, we need to talk to students about their rights and responsibilities as they grow and get ready to leave school; otherwise, they’re at risk of becoming just more unpaid workers in the powerful AI machine.

What can be done about it?

I want to end this post – and this series – on a hopeful note. There’s no denying that Generative AI and related technologies have the potential to positively impact the education system. The same could easily be said about any technology, from writing tools to word processors, PCs to smartphones. The issue – as with all of these technologies – is not whether they can improve learning, but how they are used.

If all we use AI for is efficiency, then we’re heading towards EdTech v2.0. Over the past couple of decades we’ve seen wave after wave of technology that promised enormous gains to learning, but delivered very little. We’ve heard the phrase “this will revolutionise education” again and again, and in general the education system – and its flaws – has proven to be extremely robust. We are now starting to see the potential negative impacts of AI technologies, from datafication and predictive profiling to generative AI’s capacity to perpetuate bias.

So counteracting the entrenchment of existing power structures, and the centralising of wealth in the hands of the already wealthy, seems like a huge challenge – and not something that can be tackled by educators.

But Artificial Intelligence isn’t EdTech. It’s not an app or a piece of equipment, or even a single system. It is a complex infrastructure that will ultimately be woven through all of the technologies we already use, and those on the horizon. And importantly, it’s not quite fully established in education yet, which gives us an opportunity for critique. This series of posts aimed to support that critique by engaging students and educators across different curriculum areas in meaningful discussions about AI and the future of education.

Here are five final practical ideas for Teaching AI Ethics:

  1. Develop Clear Policy: As a school leader, it’s important to acknowledge the ethical complexities of AI and develop a comprehensive policy that strives to mitigate these issues. Here’s how you can approach it:
    • Start with an open discussion among staff, students, and community to understand their concerns about AI. Use these discussions to identify key issues that your policy should address.
    • Clearly outline the roles and responsibilities of all stakeholders in adhering to the policy.
    • Define what AI tools are acceptable within your institution and under what circumstances. This can include specifics about data collection, use, and storage.
    • Regularly review and update the policy as new AI technologies and ethical challenges arise.
  2. Data Privacy Activity: Understanding the nuances of privacy policies can be challenging, even for adults. Here’s a practical way to involve students:
    • Organise a workshop where students dissect the privacy policies of common AI platforms they use. Guide them to understand how their data is collected, used, and protected.
    • As a follow-up activity, students could create ‘ideal’ privacy policies for an AI product, incorporating the most ethically robust components of the policies they’ve studied.
  3. Understanding the Human Labour Behind AI: Many AI models rely heavily on human input and labour, often from individuals in developing countries who are paid low wages to do time-consuming and sometimes distressing work. This often-invisible labour is used in training these models and cleaning up their outputs. Here’s a possible approach to incorporate this issue into the curriculum:
    • Initiate a study and discussion on the topic of “free labour” in AI. This should include research on the lives and working conditions of the data annotators whose work powers many AI models, with a specific focus on the exploitative labour practices mentioned earlier in this post.
    • In conjunction with this, students could research and discuss the notion of “data colonialism” and how the data of individuals is used (often without their knowledge or consent) to train AI models, benefiting companies but often not the individuals themselves.
    • To make this issue more tangible, students could examine the data they generate in their own lives and discuss how it might be used to train AI models. This could involve looking at their own digital footprints, understanding what kind of data they generate, and discussing how this data could be used in AI training.
    • Lastly, encourage students to come up with ideas for regulations or systems that could make the process more equitable, such as ways to provide transparency about how data is used, or methods to share the revenue generated from this data with the people who produced it.
  4. Interrogating AI Platforms: Understanding AI ethics policies can help students and educators make informed decisions about which platforms to use and how to use them ethically.
    • Assign students to research the Responsible AI and Ethical AI policies of various AI platforms and services.
    • Ask them to present a report on their findings, discussing what these policies mean, how they could be improved, and the potential implications for users.
    • Incorporate these findings into a larger discussion on AI ethics in the classroom, allowing students to understand the real-world implications of these policies.
  5. AI for Good Project: Deliberately focusing on the potential positive applications of AI can raise important ethical questions about current uses of the technology.
    • Ask students to design an AI solution for a societal problem they care about. This could be a local issue (e.g., reducing traffic in their town) or a global one (e.g., predicting natural disasters).
    • As part of the project, students should also conduct an Ethical Impact Assessment. This assessment should detail potential ethical issues related to data privacy, bias, and societal impact, and propose strategies to mitigate these issues.
    • Through this project, students will not only gain practical experience in AI but also learn about the ethical considerations that are crucial to responsible AI development and use.

That’s it! This series has unfolded over several months, and includes dozens of lesson activities across a wide range of curriculum areas. Make sure to check out the other posts in the series. To stay up to date on future posts, join the mailing list.


I regularly run professional development, consultancy, and advisory services for literacy and Artificial Intelligence. Got a comment, question, or feedback? Get in touch:
