Myths, magic, and metaphors: the language of generative AI

As part of my PhD studies, I read and write a lot of stuff that doesn’t really fit into my research, but which I find interesting anyway. I’m categorising these “spare parts” on my blog, and if you’re interested in following them you’ll find them all here.

I’ve written a fair bit about AI ethics, including the bias and discrimination in language models, the environmental impact of training AI, and the ongoing issues over copyright and intellectual property. The field is an ethical nightmare, and I usually lead my sessions on AI with some pretty strong caveats about these issues.

But there’s another aspect of AI that I’m particularly interested in, and that’s the language we use to describe these systems. I think language goes hand in hand with ethics, because the words we use to describe AI have the power either to distract us from, or to draw our attention to, the complexities of the technology.

In an earlier post, I wrote about what could happen if AI recedes into the woodwork, and I discussed how the language of AI is already starting to shift from the “magical” and “superhuman” towards the more mundane and neutral. In this post, I’m digging into the myth, magic, and metaphors of AI to explore the power of language: the “mythologising” of AI narratives; the conflation of Artificial Intelligence with magic, religion, and the sublime; and the ways we anthropomorphise the technology.


AI mythologies

There are a few meanings to the word “myth”: myths can be stories which help explain everyday phenomena, or widely held but false beliefs. In the discourse surrounding AI, myths function in both ways: they help define the history of the field, and they circulate as a series of convenient but ultimately untrue ideas.

There are several pervasive myths in the field of Artificial Intelligence, some of which have infiltrated even high-level policies and regulations. Benedetta Brevini, in her chapter for the 2021 open access book AI for Everyone?, points to three of these myths in the European Commission’s High-Level Expert Group report on AI:

1. That AI is a solution to humanity’s (and capitalism’s) greatest challenges

2. That AI is inevitable and inescapable

3. That AI can and will surpass human intelligence

These three myths, present throughout the European report, serve to “control political debates”, including in ways which benefit the corporations that own the technologies. Drawing on Gramsci’s assertion that myths can become “common-sense”, and in turn create the “folklore of the future”, Brevini highlights how such discourses become “hegemonic”: reinforcing existing power structures and dominant social narratives.

Vincent Mosco (who I wrote about in the previous post) calls these myths the “storylines of our time”, and Rainer Rehak, in another chapter of AI for Everyone?, points out that this mythologised discourse also leads to the use of technology not because of its actual capabilities, but because of its “assumed functionality”: the tech doesn’t necessarily do what we want it to, but we buy into the dream of efficiency and productivity.

I listened to a podcast recently where Monash’s Neil Selwyn interviewed Audrey Watters, the “Cassandra” of edtech who has moved on from her critique of educational technologies to focus on the algorithms that power fitness apps. Watters pointed out that most digital technologies – the apps and services we use every day – are pretty rubbish.

Think about how frustrated you get when your phone apps crash, or internet banking doesn’t work, or you have any interaction at all with ChatGPT’s woeful interface. And yet, we persist with these technologies because of the myths we tell ourselves, their “assumed functionality” and the threat of being branded a Luddite if we reject them.

Magic and the “AI Sublime”

Any sufficiently advanced technology is indistinguishable from magic.

Arthur C. Clarke, 1973

Reinforcing these mythologies, Artificial Intelligence is often described in language evoking magical, religious, or “sublime” qualities. Arthur C. Clarke’s much-quoted “third law” from the 1973 revision of Profiles of the Future seems to be borne out by the magical narratives of AI. But the idea of AI’s “divine hand” (Brevini, 2021) serves only to obscure the ethical concerns that plague the field.

Alexander Campolo and Kate Crawford call this combination of algorithmic certainty and sublime discourse “enchanted determinism”. They compare the language of Artificial Intelligence to words used to describe alchemy and other magic, interrogating the scope of this discourse as it ranges from public and media perceptions through to experts in the field. It’s a great article to read if you’re interested (like I am) in the ways technology companies conflate their products with magic, and why.

As AI systems like deep learning models pervade many areas of our lives, both reflecting and shaping the world, Campolo and Crawford argue that the “magic” narrative distracts us from the accountability of the organisations responsible for AI’s creation.

Carrie O’Connell and Chad Van de Wiele explore some of the religious associations with the technology, comparing AI worship to religious fervour and the drive to replace the “ontological infinitude” (Scone, 2019) of God with data and information and the promise of “transubstantiation” for digital believers. O’Connell and Van de Wiele even take this line a step further, suggesting that in the process of creating unknowable black boxes of data, God has been replaced by algorithms.

The comparison of technology to religion (and vice versa) is nothing new. In a recent article for The Conversation, Rhona Trauvitch from Florida International University explored some of the comparisons between religious interpretations of meaning and our understanding of the language of code. From the notion in Kabbalah that language is the building block of creation, to the Jewish Golem, animated by “the word”, we have a long history of conflating the machine and the divine, and of creating meaning where before there was none. In a way, our current obsession with “intelligent” chatbots is more of the same.


Metaphors, anthropomorphism, and the importance of language

Finally, the language used to describe AI, its processes, and the ways in which it interacts with the human world has an important impact on how we perceive the technology. Rehak argues that the “powerful metaphors” of AI perpetuate the deterministic myths of the technology. Even the term “Artificial Intelligence” itself is problematic, as the technology is neither artificial nor intelligent.

Referring to the machine learning algorithms, and the datasets which underpin them, as “artificial” distances the output from human responsibility, complicating the ethical debates about environmental impact, accountability, privacy, and the use of human labour in the labelling of data.

Here’s what Emily Bender, one of the authors of the now-seminal AI article On the Dangers of Stochastic Parrots, has to say about the term:

In fact this is a marketing term. It’s a way to make certain kinds of automation sound sophisticated, powerful, or magical and as such it’s a way to dodge accountability by making the machines sound like autonomous thinking entities rather than tools that are created and used by people and companies.

Emily Bender, opening remarks on “AI in the workplace…”

The word “intelligence” is just as contentious, given we still don’t have a clear definition of the human intelligence to which machine intelligence is compared. Right now, it’s fashionable to label AI not only “intelligent” but also “creative”, with research like this pitting GPT against narrow measures of creativity to claim that it outperforms humans.

We also anthropomorphise AI endlessly, giving it human traits to fill the gaps in our language for describing what’s actually going on in these processes. Words like “‘recognition’, ‘learning’, ‘acting’, ‘deciding’, ‘remembering’, and ‘understanding’” are distinctly more “human” than the abstract language typically used in AI-adjacent fields like mathematics and computer science. Mark Barnett also challenges terms like “hallucination” and “temperature” as both misleading and unhelpful when talking about these technologies in education.
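“Temperature” is a good example of how un-mystical these terms are once you look underneath them. Here’s a minimal Python sketch of temperature scaling as it’s commonly described for language models; the numbers are made up for illustration, and this isn’t any particular model’s implementation:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw model scores into probabilities, rescaled by temperature.

    Nothing anthropomorphic happens here: a higher temperature flattens
    the distribution (more varied next-word choices), while a lower one
    sharpens it (safer, more repetitive choices).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for three candidate next words
logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, temperature=0.5))  # sharper: the favourite dominates
print(softmax_with_temperature(logits, temperature=2.0))  # flatter: more variety
```

In other words, the “creative” setting on a chatbot is a division operation, not a mood.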

It’s important to remember that this form of Artificial Intelligence is simply an algorithm designed to find patterns in large datasets. My own iceberg analogy (a language model we can see and interact with, sitting atop an invisible, below-the-waterline dataset) steps away from anthropomorphic imagery, but it’s still an analogy, and all analogies are oversimplifications.
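If “an algorithm designed to find patterns” sounds abstract, here’s a deliberately tiny sketch of the idea: a toy bigram model that counts which word follows which in a scrap of text, then generates new word strings from those counts. Real language models are vastly larger and more sophisticated, but the point stands: pattern-matching, without understanding.

```python
import random
from collections import defaultdict

# A toy bigram "language model": count which word follows which in a
# tiny corpus, then generate text by sampling from those counts.
# No understanding, no intent; just patterns in data.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    # Pick a word that followed the current one in the corpus;
    # fall back to a random corpus word if there are no successors.
    word = random.choice(follows[word]) if follows[word] else random.choice(corpus)
    output.append(word)

print(" ".join(output))  # fluent-looking strings of words, no meaning behind them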

So what?

So the big question is: who cares? Is the language we use to talk about AI really that important? Well, companies like Apple certainly think so, given they’re steering well clear of the language of AI and the current hype, in favour of more neutral terms like “machine learning”.

I’ll end on a quote from every educator’s favourite social constructivist, Lev Vygotsky:

A word devoid of thought is a dead thing

Lev S. Vygotsky, Thought and Language

Chatbots don’t make sense, they make words. Artificial intelligence creates words devoid of thought – dead things to which we attribute meaning. But the words we use to talk about AI have power, so perhaps we should be a little more considerate of the myths, magic, and metaphors we use to describe the technology, and who that language serves.

