Using Metaphors to Teach Critical AI Literacy


For the past few years, there has been a growing need to question and critique the language used to describe artificial intelligence systems like ChatGPT.

The technology companies seem to resort to magical, mystical language which distracts from the realities of what the technology is, and what it can do. On the flip side, many in the academic community use metaphors like stochastic parrots and bullshit machines to critique the fact that, as I once said (on a sticker!), “chatbots don’t make sense, they make words.”


In a new paper from Jasper Roe, Mike Perkins and me, we explore how metaphors can be used as a vehicle for Critical AI Literacy (CAIL). We also define CAIL, and explore four possible lesson activities.

Here’s the full abstract from the paper, Reflecting Reality, Amplifying Bias? Using Metaphors to Teach Critical AI Literacy published in Journal of Interactive Media in Education (JIME):

Abstract

As educational institutions grapple with questions about increasingly complex Artificial Intelligence (AI) systems, finding effective methods for explaining these technologies and their societal implications to students remains a major challenge. This study proposes a methodological approach utilising Conceptual Metaphor Theory (CMT) and UNESCO’s AI competency framework to develop activities to foster Critical AI Literacy (CAIL). Through a systematic analysis of metaphors commonly used to describe AI systems, we develop criteria for selecting pedagogically appropriate metaphors and demonstrate their alignment with established AI literacy competencies, as well as UNESCO’s AI competency framework.

Our method identifies and suggests four key metaphors for teaching CAIL. This includes AI as a funhouse mirror, a map, an echo chamber, and a black box. Each of these metaphors seeks to address specific characteristics of GenAI systems, from filter bubbles to algorithmic opacity. We present these metaphors alongside pedagogical activities designed to engage students in experiential learning of these concepts. In doing so, we offer educators a structured approach to teaching CAIL that touches on aspects of technical understanding and provokes questions about societal implications. This work contributes to the growing field of AI and education by demonstrating how carefully selected metaphors can make complex technological concepts more accessible while promoting CAIL.

https://jime.open.ac.uk/articles/10.5334/jime.961

In the paper, we define Critical AI Literacy as follows:

The ability to critically analyse and engage with AI systems by understanding their technical foundations, societal implications, and embedded power structures, while recognising their limitations, potential biases, and broader social, environmental, and economic impacts.

We then explore some common metaphors alongside a few of our own invention, and discuss how they might be aligned to the UNESCO AI Curriculum Goals.

Extract from Table 2: Initial List of Potential Metaphors to Guide AI Literacy.
Extract from Table 3: Selected Metaphors, Selection Criteria and Alignment to UNESCO AI Competency Framework.

Finally, we take four of the selected metaphors and develop lesson ideas including activities and discussions. Here is the full lesson activity for the metaphor “AI as Map”:

Activity 3: AI as a Map – Representation, Power and Bias

Learning Objectives

  1. Analyse how AI’s representation of knowledge parallels historical maps, focusing on inclusivity, bias, and power.
  2. Recognise limitations in AI’s representation of knowledge.
  3. Critique the metaphor of AI as a map to uncover insights into technology’s impact on perception and inclusivity.

Addresses Curriculum Goal

CG2.1.1: Surface ethical controversies through a critical examination of the use cases of AI tools in education.

Introducing the Metaphor

The instructor can begin by discussing historical maps, such as the Mercator projection, which distorts relative size and centres Western countries, as a springboard to discussing AI’s role in shaping perspectives. Students could explore questions such as: what parts of reality are emphasised or minimised in a map, and why? Who decides what is “on the map”, and how does this affect our understanding of the world?

The instructor can introduce AI as a map, illustrating that AI “maps” knowledge through data selection, categorisation, and emphasis, with similar power dynamics shaping what is visible or hidden. Reflect on the statement “the map is not the territory”, exploring how a representation of the world built from a Large Language Model’s training dataset is not a true reflection of that world.

Learning Activity: Mapping AI’s Knowledge Terrain

In small groups, students will choose an area of knowledge (e.g. cultural heritage, health information, or social trends) and map it from the perspective of an AI tool. They should analyse how AI represents this area, noting:

  • What information is readily accessible online that can be “mapped” by data scraping?
  • What perspectives or voices are missing or minimised from the data?
  • What biases or assumptions are present in AI output? (In terms of the metaphor, how is the “map” different to the “territory”, or reality?)

Compare and Contrast “Maps”: Groups can then compare their findings by examining differences in representation and potential biases.

Discussion Questions

In analysing AI’s ‘map’ of knowledge, students observe which perspectives are amplified and which are neglected, noting that AI often prioritises dominant narratives while overlooking marginalised voices. This selective representation shapes our perception of knowledge, as AI’s outputs reflect the biases and gaps inherent in its training data. The metaphor of ‘AI as a Map’ highlights AI’s impact on knowledge and power by revealing how certain viewpoints are centred while others are diminished, much like historical maps that emphasise the perspectives of those in control. However, this metaphor has limitations, as it may imply a static view of knowledge rather than AI’s dynamic interaction with evolving data. Understanding these dynamics encourages a more responsible approach to AI use, prompting users to critically assess an AI’s outputs and recognise where broader, more inclusive perspectives are needed.

Read the full paper

The paper is Open Access and available in full in the new special edition of JIME. You’ll find the paper alongside a commentary from Emily M. Bender on the famous ‘Stochastic Parrots’ metaphor, as well as a collection of excellent articles exploring the conceptual and practical implications of metaphors for the critical teaching of AI.

Read the full paper here.

Access the entire special edition here.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:

