AI Metaphors We Live By: The Language of Artificial Intelligence

In their seminal work “Metaphors We Live By”, George Lakoff and Mark Johnson* argue that metaphors aren’t just poetic or rhetorical flourishes, but fundamental elements of human thought and language. As we grapple (metaphor) with the implications of artificial intelligence (also a metaphor…), it’s worth examining the metaphors we use to describe and understand these complex systems. Our choice of language doesn’t just reflect our understanding – it actively shapes it.

Over the past year, I’ve written extensively about generative AI and its impact on education. Looking back on this body of work, I’ve realised how heavily I’ve relied on metaphors to explain complex concepts – despite calling out the issues with metaphorical language in an early post. These metaphors range from the obvious to the subtle, from extended analogies to deeply embedded conceptual frameworks that we might not even recognise as metaphorical.

In this post, I’ll categorise and analyse the key metaphors I’ve used to describe AI. I also uploaded a significant chunk of my blog into the Claude language model to see if I had missed anything with my meagre human eyes. By interrogating (problematic metaphor!) our own language, we can gain insights into how we conceptualise AI and perhaps identify limiting or problematic ways of thinking about this technology.

AI as a Physical Object or Entity

Many of our metaphors for AI treat it as a tangible, physical thing. This helps us grasp something inherently abstract and complex. I’m guilty of this in many ways, including:

The Black Box

One of the most common metaphors I’ve used is the idea of AI as a “black box”. This metaphor emphasises the opacity and inscrutability of AI systems. As I wrote in The AI Iceberg: Understanding ChatGPT, “large language models and related deep learning technologies are often referred to as black boxes because the connections and networks within them are so massively complex that no human or team of humans could possibly unravel everything going on inside the model.”

This metaphor is powerful because it captures both the complexity of AI systems and our limited ability to understand their inner workings. However, it can also be limiting if it leads us to view AI as completely unknowable or beyond analysis. In fact, there is excellent research trying to crack open the black box, including recent papers from Anthropic and OpenAI, both of which I touched on in a recent article on the importance of transparency for AI grading and assessment.

The Iceberg

Building on the black box metaphor, I’ve also compared AI to an iceberg. In the same article, I used this analogy to explain the visible and hidden aspects of large language models like ChatGPT. The visible “tip” represents the interface and outputs we interact with, while the vast hidden “underside” represents the complex architecture, training data, and processes that make the system work.

This metaphor is useful for emphasising that there’s much more to AI systems than what we directly interact with. It can help us remember the vast infrastructure and data that underpin seemingly simple chatbot interactions, and the fact that many of these datasets come from obscure, proprietary, or otherwise secretive origins.

AI as a Process or System

Another set of metaphors conceptualises AI not as a static object, but as a dynamic process or system. The suggestion of movement or dynamism might come in the form of machine-like terms which suggest automation, or of anthropomorphised, personified terms which contribute to (misleading) comparisons of AI to human intelligence.

The Pattern-Matching Machine

I’ve frequently described AI, particularly large language models, as “statistical pattern-matching machines”. In “Don’t use GenAI to grade student work”, I wrote: “GenAI produces outputs based on probabilistic patterns in its training data, without any real understanding, reasoning, or ability to make qualitative judgments.”

The metaphor is useful for dispelling notions of AI as truly “intelligent” or “understanding” in a human sense: it reminds us that despite their impressive outputs, these systems are fundamentally based on statistical correlations rather than genuine comprehension. It is, of course, a simplification – the output of Generative AI appears to be much more sophisticated than simple pattern matching. And whether AI “thinks” or “reasons” (it doesn’t) might be less important than whether people believe it is reasoning, so my “pattern matching” metaphors might ultimately fall on stony ground.
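
To make the “statistical pattern-matching machine” idea concrete, here is a toy sketch in Python of the core operation: given some context, assign a probability to each candidate next token and sample one. The tokens and probabilities below are invented purely for illustration; real models compute distributions over tens of thousands of tokens using a neural network.

```python
import random

# Toy "statistical pattern matching": given the context
# "The cat sat on the ...", a language model assigns a probability
# to each candidate next token and samples one. These tokens and
# probabilities are invented for illustration only.
next_token_probs = {
    "mat": 0.62,
    "sofa": 0.21,
    "roof": 0.12,
    "moon": 0.05,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())
print(random.choices(tokens, weights=weights, k=1)[0])  # e.g. "mat"
```

Nothing in this process “understands” the sentence; the output is a weighted draw from learned correlations, repeated one token at a time.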

The Virtuous (or Vicious) Cycle

When discussing the rapid development and adoption of AI, I’ve described it as a “virtuous cycle” for tech companies. In “Don’t use GenAI to grade student work”, I noted: “The exploding use of generative artificial intelligence, with both individual users and commercial entities, is producing a virtuous cycle for the development of these technologies.”

This metaphor highlights how the widespread use of AI creates a feedback loop, providing more data and resources for further development. It’s a “virtuous” cycle from the perspective of tech companies, but could be seen as a “vicious” cycle in terms of centralising power and data. I wrote more about this in an early article on the ways AI technologies reinforce hegemonies and existing power structures.

The suggestion of dynamism in the “cycle” metaphor adds to the overall narrative of AI as a technology which never stops. This can be helpful in explaining to people how rapidly the technology has developed, but it also contributes to harmful narratives like “if you don’t embrace AI, you’ll get left behind.”

The Journey

We often talk about AI development as if it’s moving along a single path, when the reality is a collection of breakthroughs in often disparate fields of research. I’ve written frequently about the “journey” of understanding AI, or conveyed it in similar terms. Ray Kurzweil, technologist and author of The Singularity is Near (2005) and, more recently, The Singularity is Nearer (2024), uses a similar idea in his articulation of the Law of Accelerating Returns.

Kurzweil, while not the originator of this specific concept, has written extensively about technological progress. His version of the “Law of Accelerating Returns” suggests that technological change is exponential rather than linear, and that technological progress can seem to suddenly accelerate.

The journey metaphor might also lend itself to a “historical smoothing” of the trajectory of these technologies, which itself runs the risk of technodeterminism when considering the future. It’s easy to think of the technologies we have now as “inevitable”, and extend that into the next five or ten years, but the iPhone wasn’t inevitable; Facebook wasn’t inevitable; Google wasn’t inevitable: all of these technologies came to be through complex economic, societal, political and technological advances, not because of a predetermined “journey”.

AI as a Force or Actor

Perhaps the most pervasive metaphors are those that personify or anthropomorphise AI, treating it as an actor with agency.

The Hornet’s Nest

I’ve often described AI as a force causing upheaval in education and other fields. In “The AI Assessment Scale: From no AI to full AI”, I wrote: “AI chatbots like OpenAI’s ChatGPT have kicked the hornet’s nest in education.” A metaphor like this can be useful for emphasising the significant changes AI is bringing about. But it’s important to remember that AI itself isn’t an actor with intentions – the disruption comes from how humans choose to develop and deploy these technologies. Importantly, OpenAI (itself just a metaphor for the people who run the company, the investors who back it, and the workers within it) is the actor doing the “kicking”: the technology, ChatGPT, is just the means to that end.

The Mirror and The Magnifying Glass

I’ve described AI as a mirror reflecting societal biases. In “Don’t use GenAI to grade student work”, I noted: “GenAI models are trained on vast datasets scraped from the internet, which can encode all sorts of societal biases and discrimination.” The mirror doesn’t provide an accurate, 1:1 reflection either: it’s more like a fairground hall of mirrors, amplifying and distorting certain images.

Related to the mirror metaphor, I’ve also described AI as a magnifying glass for societal issues. This metaphor suggests that AI not only reflects but amplifies and makes more visible existing societal problems and biases.

These metaphors are powerful for understanding how AI can perpetuate and amplify existing societal issues; however, we should be careful not to take them too far – AI systems don’t passively reflect bias, but actively propagate it through their operations. AI is not, like a mirror, an inert surface reflecting an objective “truth”.

The Mouth

I have noticed in my own writing, and in the interviews with participants in my PhD studies, an inclination to refer to AI not just as human-like, but specifically with language related to mouths. This might be due to the positioning of Generative AI as primarily a chatbot: it’s natural to think of something which “chats” as having a “mouth”. But the metaphor often extends in interesting directions, including “feeding” data into LLMs, AI which “regurgitates” content, or getting AI to “chew on” an idea: all comments which have emerged from my research participants.

ChatGPT, particularly with its voice mode, “speaks”, and a chat thread is a “conversation”. I’ve pointed out in many professional learning sessions that the best way to engage with ChatGPT and similar models is through “dialogue” or, of course, a “chat”.

The positioning of AI via chatbots is interesting because it’s such a logical and seemingly benign way to interact with a powerful technology. The training and fine-tuning of these models by the developers contributes to their “chatty” nature, with some being specifically programmed to respond in “friendly” or sociable ways. AI cannot chew, regurgitate, speak, or chat, since it has none of the required organs to do so. It’s worth asking whom the “chatbot” format really benefits. While it’s eminently possible to interact with Claude in code, for example, it doesn’t make for a particularly engaging, compelling, or addictive interaction.
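
To illustrate, here’s a minimal sketch of what interacting with Claude “in code” looks like, assuming the Anthropic Python SDK (`pip install anthropic`), an API key in the environment, and a model identifier that may since have been superseded – the specifics are assumptions, not a prescription. Stripped of the chat interface, the “conversation” is just a function call that returns text:

```python
import anthropic  # assumes `pip install anthropic` and ANTHROPIC_API_KEY set

client = anthropic.Anthropic()

# A single "turn" of conversation, reduced to a plain API call.
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model ID; may need updating
    max_tokens=200,
    messages=[
        {"role": "user", "content": "Explain the 'black box' metaphor for AI in two sentences."}
    ],
)

print(response.content[0].text)
```

The exchange works, but there is no typing indicator, no friendly persona, and nothing to keep you talking – which is rather the point.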

The Wave

AI development is often described using water metaphors. In “Hands on with AI audio generation: GAI voice, music, and sound effects“, I wrote: “Although Generative AI isn’t a new technology, it’s definitely been having its moment in the sun recently.” This implies AI is like a wave, rising to prominence.

In conference keynotes I’ve also called a potential negative of AI generated content a “Tsunami of Slop”, positioning it as a natural disaster for the online ecosystem: I’ll return to that particular metaphor later.

AI as a Tool or Instrument

Many of our metaphors frame AI as a tool to be used, emphasising human agency in its application. My PhD supervisor Lucinda McKnight and Wiradjuri woman and educator Cara Shipp recently wrote an article problematising the word “tool” in AI contexts, arguing that this framing oversimplifies the complex nature and impacts of AI systems. They suggest that viewing AI merely as a neutral tool ignores its “embeddedness” in broader socio-technical systems, its potential for harm, and the power dynamics inherent in its development and use.

Here are a few of my conscious and unconscious “tool” metaphors.

The Double-Edged Sword

I’ve often described AI as a double-edged sword, emphasising both its potential benefits and risks. This metaphor is intended to encourage a balanced view of AI, recognising its capabilities while remaining aware of its dangers.

The Copilot

When discussing the role of AI in education, we suggested at level 5 of the AI Assessment Scale that it should be seen as a “copilot” (not the Microsoft Copilot application) rather than an autopilot. This metaphor suggests that AI should assist rather than replace human decision-making, maintaining the crucial role of human judgment and expertise – but it is itself an anthropomorphism.

The Spell Checker and the Calculator

In critiquing the use of AI for grading, I’ve compared it to using a spell-checker to assess the aesthetic qualities of a poem. The metaphor highlights the limitations of AI in understanding nuance and context, especially in creative or subjective domains. It’s an obvious simplification, however, and I’ve written elsewhere that AI is not like a calculator, suggesting that I can’t actually make my mind up about what kind of tool AI is or isn’t (I am large, I contain multitudes).

AI as an Environmental Force

Some metaphors and analogies, including ones I’ve designed specifically to discuss AI, conceptualise the technology in terms of its broader impact on our digital and social environments.

Digital Plastic

In Digital plastic: Generative AI and the digital ecosystem, I compared generative AI to microplastics in the ocean, suggesting that AI will pollute the digital ecosystem in a similar way. The analogy highlights the potential long-term, pervasive effects of AI on our digital environment. I recently extended that original post with a follow-up explaining the implications of Digital Plastic now that we have cheap, easy-to-use video and audio generation models.

Slop as a Service

I used the term “Slop as a Service” at a 2024 AISNSW conference to describe the low-quality, AI-generated content flooding the internet – a particularly harmful form of “digital plastic”. Like the tsunami of slop mentioned earlier, I feel there’s a real risk of being overwhelmed by generated media which is low quality, low human input, and low value. The term “slop” itself has unknown origins, but was popularised by Simon Willison. I think it’s a great metaphor, with the mouth-feel of the word “spam” for email drivel, and similar connotations.

The Importance of Metaphor

These metaphors aren’t just linguistic flourishes – they fundamentally shape how we think about and interact with AI. The black box metaphor might lead us to view AI as unknowable and beyond control; for educators, this is dangerous territory since many of us already feel that technology is out of our influence. The pattern-matching machine metaphor could encourage a more critical view of AI’s capabilities, but it is also simplistic and reductive, belying the sophisticated capabilities of these models. And while the mirror metaphor might spur us to examine the biases in our data and society, statements like “AI just reflects systemic bias” are used to avoid responsibility for building better systems.

As educators, policymakers, and users grappling with the implications of AI, we need to be aware of the metaphors we’re using and the conceptual frameworks they impose. Are we anthropomorphising AI in ways that obscure its limitations? Are we framing it as an unstoppable force, neglecting human agency in its development and deployment?

By examining and questioning our metaphors, we can hopefully develop a more nuanced and accurate understanding of AI, and pass that understanding on to our students. We can move beyond simplistic narratives of AI as either a panacea or a threat, towards a more realistic view of its capabilities, limitations, and implications.



*For the images in this article, I took a photo of the front cover of this edition of Lakoff and Johnson’s Metaphors We Live By and used the coloured section as a ‘style reference’ in Adobe Firefly v3 (prompts embedded in the image metadata). I couldn’t find any attribution for the cover image, but the publisher is the University of Chicago Press.
