The Effort Economy of Slop

I came across a new definition of AI slop the other day on Simon Willison’s blog, via a Bluesky user:

Slop is something that takes more human effort to consume than it took to produce. When my co-worker sends me raw Gemini output, he’s not expressing his freedom to create, he’s disrespecting the value of my time.

That definition crystallised something I’ve been circling around for a couple of years now in articles about multimodal GenAI, digital plastic and slop. It’s the relationship between effortful production and the burden placed on the person at the other end, who has to make meaning from, respond to, or interact with the thing that was designed and created.

But first, here’s a photo of the sunset from the back of my house.

A serene landscape at dusk showcasing a gradient sky transitioning from orange to blue, with wispy clouds and a crescent moon visible. Silhouetted trees line the horizon over a grassy field.

There wasn’t much design involved in this photo. I saw a sunset out of the window, walked outside, and snapped the photo on my iPhone. But even that minimal effort required some conscious and unconscious choices that would have been stripped away had I just opened an AI app and requested “photo of sunset.”

A scenic coastal path leading towards the ocean at sunset, with a vibrant sky displaying hues of orange, pink, and purple. A fishing boat can be seen in the distance, surrounded by gently rippling waves.
“Photo of a sunset” in Google Nano Banana Pro

As I walked outside, barefoot, I stepped on a branch that one of my children had kindly left on the veranda, continued across dry, coarse grass to the other side, pulled out my iPhone and regretted not bringing a better camera with me. I took a couple of photos and, honestly, by the time I’d managed all of that, the moment had passed and the spectacular sunset I saw through the window had largely disappeared.

Still, it took effort to produce.

Your consumption of it — here on the blog, or on the LinkedIn post where I first shared this quote — probably took less effort. The effort of thinking, “Huh, nice amateur photo of a sunset.”

The strata of meaning-making

In an earlier article on the semiotics of synthetic media, I suggested that GenAI was compressing the strata of discourse, design, production and distribution into a thin smear. That compression might have some positive qualities — the “miraculous” qualities that semiotician Roland Barthes saw in plastic — but it’s also a compression that strips away the layers where meaning accumulates.

If semiosis – interpreting meaning – begins at the level of production, what happens to meaning when production becomes trivial?

Each layer of the strata has historically required effort, and new technologies have always reduced the effort required. The printing press reduced the effort of distribution. Design software reduced the effort of both design and production. The internet reduced the effort of discourse — that gathering of ideas and knowledge that underpins all forms of communication.

But GenAI flattens all of the strata at once, violently, suddenly. And it redistributes the effort, pushing it away from the producer and onto the consumer, who now has to do the work of sorting, evaluating, interpreting and discarding.

The effort economy of communication

All communication involves an implicit economy of effort between producer and consumer. In most human communication, the producer bears the greater share: choosing words carefully, designing for an audience, revising, editing and making judgments about what to include and exclude. The consumer’s effort is arguably lower – reading, viewing, listening, interpreting – and the gap between the two is where much of the value of communication sits.

When a writer revises a sentence six times, the reader benefits from that invisible labour. The effort is asymmetric by design, but generative artificial intelligence inverts it. The producer’s effort drops close to zero – prompt/click – while the consumer’s effort stays the same or increases.

The consumer must now read more carefully, check for hallucinations, evaluate quality, determine provenance, assess whether the content was produced with any understanding of the subject at all. The consumer is doing all of the semiotic work that the producer used to do.

This is different from the effort asymmetry described in the paper Why Slop Matters, which focused on the gap between what AI produces and what a human would need to produce the same output. The argument here is about the transfer of effort from producer to consumer, not just the reduction of effort in production.

Digital plastic and the inversion of care

Barthes called plastic “miraculous,” but he also referred to it as “graceless” because it destroys “the pleasure, the sweetness, the humanity of touch.” The gracelessness of digital plastic is not just aesthetic, it’s relational. Sending someone raw AI output, as the quote at the start of this article puts it, is not expressing the freedom to create: it’s disrespecting the value of someone else’s time.

The bulk generation of AI images flooding the web. Articles that no human hands touched. Video slop on social media, which is worse than clickbait, because clickbait at least had intent: there was a human behind it making cynical, manipulative choices, but choices nonetheless. AI slop doesn’t rise even to that level; it’s production without intent, communication without a communicator.

The creators of these artifacts have not engaged in design, production or even basic distribution choices. When the algorithm is the author, the human producer (or commissioner, maybe) has abdicated the semiotic responsibilities that Kress and Van Leeuwen located in the production layer. The meaning-making work doesn’t disappear. It transfers.

Consumption as labour

In a slop-saturated online environment, consumption itself becomes a form of labour. The idea of the audience commodity extends back in media studies to at least the 1970s, so this is not necessarily a new idea, but GenAI gives it a new material dimension. The consumer is not just providing attention as a commodity; they’re doing genuine, hard cognitive work that the producer declined to do.

Think about the educational implications of this. When a student submits AI-generated work, the teacher becomes the primary meaning-maker, reading, evaluating, interpreting and providing feedback on text no human invested cognitive effort in producing. The teacher’s labour increases while the student’s decreases. The same dynamic plays out when a school leader receives an AI-drafted policy, a parent reads an AI-generated newsletter, a colleague opens an AI-written email, or a student is faced with an AI-generated resource.

As I discussed in last week’s post about resistance, if generative AI reduces production effort to near zero, the resistance that produces learning and growth has to come from somewhere. In an educational context, the resistance should sit with the learner – the producer of work – not the teacher as the consumer of it. Slop inverts that relationship.

What counts as effort?

The effort in my sunset photo was not technical skill or production time. It was presence, attention, embodied experience: the branch on the veranda, the wish for a better camera, the act of trying to walk outside at the right moment and failing. These are forms of effort that GenAI cannot replicate because they are situated, physical and personal.

A simple heuristic captures the earlier definition of slop: if the effort to consume what you’ve made exceeds the effort you put into making it, you are producing slop. This applies whether you’re using AI or not, but AI makes it dramatically easier to cross that line without noticing.
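As a toy illustration only, the heuristic reduces to a single comparison. The function name and the numeric effort scores below are invented for the sketch; the article proposes no actual scale for measuring effort.

```python
def is_slop(effort_to_produce: float, effort_to_consume: float) -> bool:
    """Toy version of the heuristic: output counts as slop when the
    consumer's effort exceeds the producer's."""
    return effort_to_consume > effort_to_produce

# Hypothetical effort scores on an arbitrary scale:
# a carefully revised email vs. raw pasted model output.
print(is_slop(effort_to_produce=8.0, effort_to_consume=2.0))  # revised email: False
print(is_slop(effort_to_produce=0.5, effort_to_consume=4.0))  # raw AI paste: True
```

The point of the sketch is that the test is relational, not absolute: the same piece of content can be slop or not depending on how much effort went into producing it.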

We see this kind of slopification in the real world all the time, though usually as transfers of physical or logistical labour rather than cognitive labour. It’s scanning your own groceries at the self-serve checkout. It’s the “convenience” of flatpack furniture that saves on shipping and warehousing costs. It’s app check-in and printing your own baggage tags.

Slopification, like Cory Doctorow’s enshittification, might be seen as the inevitable product of consumerism. But things don’t have to be this way.

If you’ve invested genuine cognitive, creative or situated effort in production, the consumer will almost certainly sense it, whether you’ve used AI or not. You will sense it too. As I said in the framework for resistance, in metacognitive terms we know what learning feels like. And we know what it feels like to not learn. To not use our brains. To be both producer and consumer of slop.

The cost of costless production

Physical plastic’s great advantage was that it was cheap to produce. The great cost was environmental: downstream, externalised, pushed into ecosystems, future generations and onto communities with less power.

Digital plastic follows the same logic. The cost of production drops, but the cost does not disappear. It transfers to the consumer, the information environment, the people downstream who have to sort the synthetic from the situated.

If we are going to treat GenAI seriously as a technology for communication, we need to account for the full cost of what we produce. Not just the effort on our end, but the effort of the person receiving it. That accounting is a question of design: who is your audience, what do they need, and what are you asking of them?

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch.
