AI and the Art of Paper Folding

I love exploring the metaphors we use to talk about artificial intelligence, from the complex – like stochastic parrots and colonizing loudspeakers – to the banal, especially the way we anthropomorphise the technology. I think understanding these metaphors is both a good way to check the pulse of public opinion and a way to analyse these technologies more critically. Over the past couple of years, I’ve written quite a bit about AI and metaphors, examining the everyday language educators use to describe AI and coming up with some of my own, including the idea of “digital plastic” to describe the utility of synthetic media.

In this post, I’m going to introduce a new metaphor: looking at artificial intelligence through the lens of origami, the art of paper folding. I’m sure most of you will have had some experience with origami, whether it’s folding paper airplanes to throw across the classroom or making paper cranes until your fingers bleed. Origami is centred on a fairly simple idea: take a single sheet of paper and fold it into something new.

Photo by Miguel Á. Padriñán on Pexels.com

In his excellent introduction to Deleuze, Todd May uses the analogy of origami to simplify a fairly complex philosophical idea: immanence. May writes:

In origami, there is no cutting of the paper. No outside elements are introduced into it. Everything happens as an expression of that particular piece of paper. It is only the paper that is folded and unfolded into new arrangements, those arrangements being the modes of the paper, which is the origami’s substance. The extension of the paper would be its attribute. If we can imagine the paper being able to fold and unfold itself, we come closer to the concept of expression.

Todd May – Gilles Deleuze: An Introduction

May is writing about philosophy, and in particular Deleuze’s interpretation of Spinoza. We don’t need to go there right now: I’m more interested in the idea of origami.

The potential for every fold, every permutation of the finished piece of art, lies within the single sheet of paper. And I’m going to extend that analogy to talk about generative artificial intelligence systems like large language models. AI outputs – whether image, text, audio or video – cannot exist without masses of training data and the algorithms that turn that data into networks of probability. In the same way that the crane, the boat, or the jumping frog are all expressions of the same sheet of paper, every utterance from an application like ChatGPT is an expression of the data it’s trained on.
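To make that “network of probabilities” a little more concrete, here is a minimal sketch in Python. It uses a made-up two-sentence corpus rather than anything a real model is trained on, and builds the simplest possible language model: a bigram table of which word follows which, sampled to produce new sequences. Whatever it generates can only ever recombine its training text, just as the crane can only ever be a rearrangement of the sheet of paper.

```python
# A toy illustration (not any real model): a bigram "language model" built
# purely from counts over a tiny, invented corpus. Everything it can produce
# is a recombination of the training text, a fold of the same paper.
import random
from collections import defaultdict

corpus = "the paper folds into a crane . the paper folds into a boat ."
tokens = corpus.split()

# Count which word follows which: this table of probabilities is the model.
counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(tokens, tokens[1:]):
    counts[current][following] += 1

def generate(start="the", length=8):
    """Sample a short sequence; it can only ever rearrange the corpus."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the paper folds into a boat . the paper"
```

Real large language models replace the counting with billions of learned parameters, but the immanence is the same: nothing in the output arrives from outside the data and the folds applied to it.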

The origami metaphor isn’t just poetic – it cuts to the heart of how GenAI systems work. When May talks about immanence through origami, he’s describing a system where everything that emerges comes from within. There’s no external magic, no consciousness dropped in from above – just as ChatGPT’s outputs are always expressions of its training data, never truly “new” creations.

Viewed like this, artificial intelligence presents some interesting challenges, and the metaphor also helps to conceptualise some of the ongoing debates around what generative artificial intelligence is actually capable of.

Photo by Capped X on Pexels.com

From Paper to Data

You can fold an origami crane out of many things – paper, card, metal – but the properties of the material will determine the size, scale and strength of the final piece. There’s also a limit to how many times you can fold a piece of paper (it’s 12, for the record).

In much the same way, the plane of data that artificial intelligence is trained on is one of the greatest determining factors of a model’s output. Sprawling, energy-hungry models like GPT-4 consume as much of the internet as possible to produce powerful, general-purpose systems. Smaller models trained on specific programming languages also exist, and machine learning in general has a long history of training on niche data that long predates the introduction of ChatGPT.

But as recent research has shown, you can’t just keep adding more and more data and expect the results to scale exponentially – like folding a sheet of paper twelve times, there are only so many ways you can fold the data to create new, more powerful models. And just as the strengths and limitations of the medium exert control over the finished piece of origami, there are boundaries to what you can achieve with generative artificial intelligence.

Training and Folding

Once you’ve chosen the paper, you need to decide what you’re going to make – will you fold that sheet into a plane, a crane, or a jumping frog that will amaze your friends and family? Having selected which data to train on, AI developers must choose training methods and algorithms which, like origami, fold and shape the data, manipulating the internal network of probabilities that determines the outputs. The algorithms that infer connections and patterns in the data are more brute force than a skilled human folding carefully along designated lines, but each algorithmic fold serves to define the possibilities.
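The training algorithms themselves are too involved to sketch in a blog post, so here is a deliberately simple stand-in: the temperature setting applied when sampling from a model’s probabilities. The counts below are invented for illustration, but they show how a human design choice reshapes the same underlying distribution, sharpening or flattening which outputs are likely, without ever adding anything that wasn’t already in the data.

```python
# A toy stand-in for the "folds" applied to a model's probabilities.
# The counts are invented for illustration; they come from no real model.
import math

follower_counts = {"crane": 6, "boat": 3, "frog": 1}

def probabilities(counts, temperature=1.0):
    """Turn raw counts into a probability distribution, shaped by temperature."""
    # Dividing log-counts by the temperature before re-exponentiating
    # sharpens the distribution (temperature < 1) or flattens it (> 1).
    scaled = {word: math.log(count) / temperature for word, count in counts.items()}
    total = sum(math.exp(value) for value in scaled.values())
    return {word: math.exp(value) / total for word, value in scaled.items()}

for temperature in (0.5, 1.0, 2.0):
    dist = probabilities(follower_counts, temperature)
    print(temperature, {word: round(p, 2) for word, p in dist.items()})
# 0.5 {'crane': 0.78, 'boat': 0.2, 'frog': 0.02}
# 1.0 {'crane': 0.6, 'boat': 0.3, 'frog': 0.1}
# 2.0 {'crane': 0.47, 'boat': 0.33, 'frog': 0.19}
```

Lower temperatures crease the distribution sharply around the most common continuation; higher ones flatten it out. Different folds, same paper.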

Photo by Ron Lach on Pexels.com

Illusions in the Folds: Creativity and Consciousness

A lot of time and energy is being spent on researching whether artificial intelligence can be creative, or even more creative than humans. In narrowly defined tests such as the Divergent Association Test, some research suggests that artificial intelligence can be creative at scale, while other studies claim that AI can boost individual creativity but flattens and harms the creativity of groups overall.

I’ve challenged the idea that AI democratises creativity because creativity is something that is already inherently democratic. That’s not to say that there aren’t restrictions placed on human creativity – social, economic, racial, gendered, educational – but that every human has the capacity to be creative in some way, if their context allows it. We don’t need AI to make us creative.

I think we’re asking the wrong kinds of questions of AI.

Just as the paper crane is an illusion of a bird rather than a real one, so artificial intelligence is not really intelligent. The creative outputs of generative AI aren’t really creative. As May writes in his book, “there is no transcendence, only immanence” – the illusion of creativity, or perhaps even consciousness, is just that – impressive, but an illusion nonetheless.

To mix my metaphors for a moment, Bender and Gebru’s “stochastic parrot” still hasn’t sprung into life and developed its own personality – it’s as much of a paper bird as our origami crane.

Image via Adobe Firefly v3. Prompt in metadata.

Immanence and Potential

These technologies aren’t completely useless, but we have to be conscious of the utility we try to ascribe to them, and of the problems they bring with them. Using artificial intelligence means being able to critique it, and that requires some technical understanding.

Large language models are constructions of data, bound by their own immanent nature and shaped by human decisions – the data, the training methods, and the algorithms. Unlike origami, however, building them is not a slow, methodical art. The “move fast and break things” and “get too big to ban” approach of tech companies is almost the antithesis of the mindful folding of a paper crane.

All metaphors are imperfect. But we can use the origami analogy to slow down and think critically.

Activity 1: Paper Data

Duration: 45 minutes

Materials needed: Different types of paper (traditional origami paper, printer paper, card stock, etc.) and instructions for a simple origami model.

  1. In groups, students attempt to fold the same model using different types of paper
  2. Write down observations about:
    • How the paper’s properties affect the final result
    • Where the paper resists or fails
    • Which paper produces the best results
  3. Introduce and draw parallels to AI training data by discussing:
    • How different training data affects AI outputs
    • Why specialised AI models might perform better than general ones
    • The limitations of both materials (paper and data)

Activity 2: Creating a New AI Metaphor

Duration: 45 minutes

Materials needed: Large sheets of paper, coloured markers, access to generative AI tools (optional).

1. Discussion on AI Metaphors: Begin with a brief overview of the metaphors commonly used to describe AI, like “stochastic parrots”, “origami” and “digital plastic.” Discuss why metaphors are important for shaping public understanding.

2. Origami Metaphor Workshop: In pairs, students choose a particular aspect of AI (e.g., creativity, data limitations, interpretability) and create an origami model that represents that concept.

3. Presentation and Explanation: Each pair presents their origami metaphor to the class, explaining:

  • The AI concept they wanted to represent.
  • How their origami model captures or symbolises that concept.
  • Why they chose this metaphor and how it helps make AI concepts easier to understand.

4. Reflection on Metaphor Strengths and Limitations: Conclude with a class discussion on the strengths and weaknesses of metaphors in explaining complex concepts, specifically in the context of AI.

Photo by Miguel Á. Padriñán on Pexels.com

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
