Lately I’ve been digging into the metaphor of Generative AI content as “digital plastic”. I first wrote about it back in 2023 in this article, where I defined the media as a “mass produced, synthetic form of data that like its physical counterpart doesn’t degrade much over time” and likened it to the way physical plastic has left its mark on the geological record, suggesting that AI-generated content could leave a similar “layer” in the digital ecosystem.
Earlier this year, I followed up that original post with a second post, which extended the environmental analogy and critiqued the business model of AI developers responsible for these “slop producing” machines that are currently flooding the internet.
But as I mentioned in the original post, like real-world plastics, Generative AI content will have positives as well as negatives. Much of the nuance can be found in the production and distribution of these media, terms which I’m borrowing and updating from social semioticians Gunther Kress and Theo Van Leeuwen, in their text Multimodal Discourse (a text I’ve written about elsewhere for its usefulness in framing multimodal GenAI). The “plastic” analogy for media also features in that text, though there it takes a retrospective look at the impact of real plastics rather than digital media.
In this post, I’m continuing to bounce these ideas around, including going back through criticisms and praise for plastics which extend back for decades, and which might help to illuminate some of the nuances of digital plastic.
Production
Kress and Van Leeuwen’s text breaks “meaning making” down into four strata: discourse, design, production, and distribution. I looked closely at the first two in the earlier post, exploring how both discourse and design play a part in my own use of multimodal GenAI for this blog and my other writings. But production does not only realise design work; it contributes to meaning-making in its own right. In fact, in Kress and Van Leeuwen’s words, “semiosis begins at the level of production.” But what does this actually mean for my metaphor of digital plastic?
From my perspective, it suggests the meaning-making process starts the moment content is created, not just when it’s consumed. For AI-generated content, this has important implications:
- Algorithm as author: The choices made in designing and training AI models are inherently “meaningful”. What data is used for training? What is excluded? What biases are encoded in the algorithms? These decisions shape the content before it’s even generated.
- Multimodality: When we use AI to generate text, images, audio, or video, the choice of mode itself is part of the meaning-making process. A GPT-generated story carries different meanings than a DALL-E generated image on the same topic, and different encoded values will be represented – I’ll demonstrate that in a moment.
- Technological context: The capabilities and limitations of AI systems influence what can be produced and how. This technological context is as much a part of the meaning as the content itself. Different users (authors) have access to different tools and resources.
- Power dynamics: Who has access to these powerful AI tools? How are they being deployed? Who owns the data? The distribution of power in AI content creation is shaping our digital landscape.
- Constraints and affordances: The specific features and constraints of AI models (like token limits, speed, or image resolution) affect the produced content in ways that become part of its meaning.

Understanding that semiosis begins at production challenges us to look beyond just the flood of AI-generated content. We need to critically examine the systems and processes that produce this digital plastic. It’s not enough to moderate the output; we must also scrutinise the production pipeline itself.
In the context of digital plastic, this perspective brings to mind questions like:
- How does the distribution of access to AI tools shape our digital discourse?
- How do the training datasets for large language models influence the content they produce?
- What meanings are encoded in the choice to use AI for content creation in the first place?
But this talk of multimodal discourse can be quite abstract. We need some more concrete examples.
PowerPoints, Parrots, and Plastic Trees
Think about the ways in which word processors (like MS Word) and PowerPoint influence how we produce texts. You might design a presentation or a report with a certain audience in mind, but the platform you choose to produce the text exerts its own influence.
Microsoft Word ships with default line spacing, preferred fonts which change over time (Times New Roman, then Calibri, now Aptos), and affordances such as set titles, headings, and other styles and formatting quirks. Most users are content to stick within the boundaries of default settings, whereas some will actively rail against them.
Some people love PowerPoint for its simplicity and easy-to-use templates. Some organisations, like big consulting firms and government departments, use PowerPoint so much that it has become a cultural meme. Others, like The Guardian’s Andrew Smith, believe that the dot-point slides are “killing critical thought”. Again, whatever your stance on them, PowerPoint slides exert an influence on meaning in the production of texts.
Now turn to Generative AI. Emily Bender, Timnit Gebru, and others famously referred to Large Language Models like GPT as “stochastic parrots”, in that they randomly repeat back to us the churned up contents of their massive training datasets. But viewed from the perspective of ‘production’, these stochastic parrots are more than just dataset echo chambers. They’re complex systems that shape the very nature – and meaning – of the content they produce. Just as Word and PowerPoint influence our writing and presentations, AI models exert their own unique influence on the digital plastic they generate.

Consider the default “settings” of an AI model. These aren’t as obvious as font choices or slide layouts, but they’re just as impactful. The model’s training data, its architecture, safety features, even the prompts used to generate responses – all of these act as invisible defaults that shape the production. And just like how most people stick with Times New Roman or Calibri, many users of AI tools accept these defaults without question.
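As a purely illustrative sketch, the way these invisible defaults shape production can be pictured in code. Everything here is hypothetical – the names, values, and structure are not any vendor’s real API – but the pattern of defaults silently filling in whatever the author doesn’t specify is real:

```python
# Hypothetical sketch of a text-generation request. None of these names or
# default values come from a real vendor API; they only illustrate the idea
# of invisible defaults shaping production.
DEFAULTS = {
    "temperature": 0.7,    # how "safe" vs "surprising" the output tends to be
    "max_tokens": 512,     # a hard constraint on the length of the output
    "system_prompt": "You are a helpful assistant.",  # framing most users never see
}

def build_request(prompt: str, **overrides) -> dict:
    """Merge an author's explicit choices over the model's invisible defaults."""
    return {"prompt": prompt, **DEFAULTS, **overrides}

# Most users accept the defaults without question...
plain = build_request("Write a short product blurb.")

# ...while overriding them is an intervention at the level of production.
custom = build_request(
    "Write a short product blurb.",
    temperature=1.2,
    system_prompt="You are a terse, sceptical copywriter.",
)
```

Both requests carry the same prompt, but the second author has consciously intervened in the production settings – the equivalent of changing the font rather than accepting Calibri.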
The cultural influence of AI-generated content is also becoming increasingly apparent. Just as PowerPoint has become synonymous with certain types of corporate communication, AI-generated text is developing its own recognisable style. We’re starting to see certain “tells” that indicate AI authorship – perhaps a bit too fluent, a bit too generic, or filled with “delving” and “navigating”.
Returning to my main analogy, Kress and Van Leeuwen also provide a brief history of critical attitudes towards real-world plastics extending back to the 1930s. As a medium of production, plastic has been both lauded and criticised – much like we’re seeing right now with Generative AI text, images, videos, and music.
In the 1930s, artist and designer Paul Frankl wrote that, through plastics, “base materials are transmuted into marvels of new beauty”. Later, philosopher and semiotician Roland Barthes praised them as “miraculous”, writing in his text Mythologies that, “more than a substance, plastic is the very idea of its infinite transformation; as its everyday name indicates, it is ubiquity made visible.”
In fact, “ubiquity made visible” sounds like a perfect term for the AI-generated content currently flooding our search results and social media feeds, and Barthes wasn’t only praising plastic for its ubiquity. Later in the same text, he writes that the substance is “graceless” and “destroys all the pleasure…the humanity of touch”. This is all sounding pretty familiar…
Even French sociologist and philosopher Jean Baudrillard (whose work was also the inspiration for The Matrix, whether he wanted it to be or not) had a crack at synthetic materials, writing:
The manufacture of synthetics signifies for materials a stepping back from their natural symbolism towards a polymorphism, towards a superior level of abstraction which enables a game of the universal associations to take place. (from The System of Objects, 1996, quoted in Kress and Van Leeuwen, 2001, and in this more recent article on ‘Dematerialization’ by Van Leeuwen)
If I’m reading this right, Baudrillard is suggesting synthetics are freed from the inherent symbolic meanings associated with natural materials. This allows them to take on multiple forms and meanings, enabling a more abstract and universal system of associations for designers.
With the metaphor of “digital plastic”, I think we can view AI-generated content in a similar fashion. Like synthetic materials, AI-generated text, images, or videos aren’t bound by the ‘natural symbolism’ of other human-created content, even those texts produced via other digital platforms like word processors and slideshows.
This flexibility allows the ‘game of universal associations’, where we can draw on both the vast training data and the multimodality of GenAI to produce novel combinations. But this abstraction also means that AI-generated content lacks the grounding in lived experience that gives human-created content its depth and resonance. There’s a balance to be struck between the flexibility and ubiquity of AI-generated content and the authenticity and intentionality of human creation.

A New Language for Digital Plastics
Kress and Van Leeuwen close their discussion of production with a quote from Sylvia Katz, plastics expert at the Victoria and Albert Museum, who predicted in 1990 that “before the century runs out a new lexicon will have to be created for plastic, a terminology that can be understood in a direct, non-technical way.” In many ways, Katz’s prediction has come true.
While we haven’t developed an entirely new lexicon for plastics, our language around them has indeed evolved. Terms like “microplastics,” “biodegradable,” “compostable,” and “ocean plastic” have entered common usage. We now have a more nuanced vocabulary to discuss the environmental impact of plastics, with phrases like “plastic pollution,” “plastic-free,” and “zero waste” becoming part of everyday discourse.
But this evolution in language has been driven less by the need to describe plastics themselves and more by the urgent need to address their environmental impact. The new terminology is less about the technical or production properties of plastics and more about their lifecycle and effects on the world around us.
Thinking about the implications of AI-generated content, we’re in a similar position to where we were with plastics in the 1990s. We’re developing new terms and concepts to describe this digital phenomenon – “digital plastic” being one such attempt. There’s an element of both the technical and the societal; the synthetic production of media, and its impacts and implications for our digital ecosystem.
This parallel underscores something else I’ve written about a lot recently: the importance of developing clear, accessible language to discuss new technologies and their effects. As we work to understand the challenges and opportunities of AI-generated content, perhaps we too will need to create “a new lexicon… that can be understood in a direct, non-technical way” to fully grasp and communicate the impact of digital plastic on our digital world.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch.
