Human memories, ideas. Culture. History. Genes don’t contain any record of human history. Is it something that should not be passed on? Should that information be left at the mercy of nature? We’ve always kept records of our lives. Through words, pictures, symbols… from tablets to books… But not all the information was inherited by later generations. A small percentage of the whole was selected and processed, then passed on. Not unlike genes, really.
But in the current, digitized world, trivial information is accumulating every second, preserved in all its triteness. Never fading, always accessible. Rumors about petty issues, misinterpretations, slander… All this junk data preserved in an unfiltered state, growing at an alarming rate. It will only slow down social progress, reduce the rate of evolution.
You seem to think that our plan is one of censorship. What we propose to do is not to control content, but to create context. The digital society furthers human flaws and selectively rewards development of convenient half-truths. Just look at the strange juxtapositions of morality around you...
You exercise your right to ‘freedom’ and this is the result. All rhetoric to avoid conflict and protect each other from hurt.
The untested truths spun by different interests continue to churn and accumulate in the sandbox of political correctness and value systems. Everyone withdraws into their own small gated community, afraid of a larger forum.
They stay inside their little ponds, leaking whatever “truth” suits them into the growing cesspool of society at large.
The different cardinal truths neither clash nor mesh. No one is invalidated, but nobody is right. Not even natural selection can take place here. The world is being engulfed in “truth”.
And this is the way the world ends. Not with a bang, but a whimper.
We’re trying to stop that from happening. It’s our responsibility as rulers. Just as in genetics, unnecessary information and memory must be filtered out to stimulate the evolution of the species. Who else could wade through the sea of garbage you people produce, retrieve valuable truths and even interpret their meaning for later generations?
That’s what it means to create context.
In this article I want to ask a question: if platforms and AIs are already shaping our every online interaction, what is the future of human-to-human communication?
Grab your tinfoil hats, people, because I’m about to take you on a ride. If that passage made you feel uncomfortable, then you probably recognise the sentiment: the use of communications channels to control the context of what people see, hear, and interact with. What you might not recognise is the source, because this isn’t, strictly speaking, a piece of dystopian fiction. It’s not a speculative story written about the dangers of social media. It’s a monologue from an AI in the video game Metal Gear Solid 2.
The release of that game in 2001 predates the launch of Facebook by a few years, but it picks up on the ideas of media control that extend back decades before the internet existed. The speech is delivered as a matter-of-fact explanation of why the AIs have decided it’s better for everyone if they control what the population sees and hears.
You’d be forgiven for thinking it’s an all-hands email delivered by Mark Zuckerberg, a press release from Google’s CMO, or perhaps a letter from the Office of the President co-signed by the National Chief of Social Media (I made that up. Or did I?).
It’s also a speech that is echoed in a popular online conspiracy theory known as dead internet theory – a theory that suggests that from around 2016 onwards, the internet has mostly been an empty, hollow place with more bots and AI than human users. A theory which is gaining traction again because of the growing impact of large language models like ChatGPT, as well as multimodal image, audio, and video generation.
The Death of the Internet
The idea that AI will replace all of our communications online is nothing new. In fact, some people – often in corners of the internet that you and I would fear to tread, like forums on Reddit and 4chan – have been saying this stuff since the early 2020s. In 2021, The Atlantic published an article on dead internet theory, and it feels increasingly like reality rather than fiction. It’s a story of data collection, surveillance, and the gradual erosion of trust online.
Much of this conversation traces back to the pre-social-media internet. Without getting into every technical detail, America’s Defense Advanced Research Projects Agency (DARPA) launched the LifeLog project in 2003, a research effort to create a “digital diary” of a volunteer’s entire life. The idea was to stream and index emails, phone calls, web history, location data and even biometric readings such as heart rate, an ambition the brief colourfully described as tracking “every breath taken”. Intense public and congressional scrutiny followed, and the Pentagon cancelled LifeLog outright on 4 February 2004.
Similar proposals have been put forward by companies like Google with their “Selfish Ledger” speculative project, where Google suggested they could theoretically capture the details of users’ lives like a digital human genome project designed to model human behaviour. If that sounds familiar, it’s because it’s almost identical in scope to the villainous Metal Gear Solid AI quoted at the start of this article. Truth is sometimes just as strange as fiction.
And there is some truth to these theories. We have seen, for example, companies like Cambridge Analytica use data collected through innocuous-seeming Facebook apps to profile demographics and influence voting preferences in the US and UK. Author and researcher Carissa Véliz has written about these things in her excellent book Privacy Is Power: sobering reading for anyone who uses social media.
Bots and artificial intelligence-driven accounts are used to drive fake engagement, and platforms like YouTube and X are boosting certain ideas and suppressing others, essentially tuning the algorithms of billions of users. And as for the dead internet theory that the majority of the online world is now bot rather than human, just open up your Facebook account if you haven’t already deleted it. Most people access the internet via a handful of apps controlled by an even smaller number of large corporations.
The way we communicate online – in text, but also in audio, image, and video – is tightly controlled and “authorised” via these platforms which dictate everything from who can see our content, to how many characters we are allowed to type. As we contribute to these platforms, our communications are also weaponised; used to sell us targeted ads, or to persuade us to “vote yes” to Brexit. More recently, that data has been used to train massive AI systems.
One of my favourite writers on this topic – no tin foil hats in sight – is Molly White. Though White mostly writes about crypto, she is also a huge believer in the open internet, a longstanding Wikipedia author, and a great writer. These excerpts are taken from her article We Can Have a Different Web which is well worth reading in full (or listening to, since White understands the importance of multimodality and makes podcasts of all her newsletters).
When I envision the web, I picture an infinite expanse of empty space that stretches as far as the eye can see…
…eventually, businesses set up shop, selling everything from seeds to tractors to garden gnomes to landscaping services to all the kinds of things people were used to using back outside of this digital expanse.
And at first, they fit in among the hobbyist plots and community gardens. But with time, businesses learned there were other ways to extract money from the community that had grown within this acre in the digital world. They set up tolls on the pathways. They planted invasive species that encroached on what other people had built, shading them out — or they spread pesticides that poisoned what others had cultivated. Some acquired plot after plot after plot, building their own empires through which others needed to pass to get where they were trying to go. Many businesses initially invited people in with open arms, promising that if they moved within their boundaries, the business would take care of all the hard stuff — the digging, the weeding, the sowing — and let you just do the parts you wanted to do. After a time, many people opted to do so, drawn in by these easy and free services that let them spend more time admiring the flowers or visiting neighbors and less time doing the dirty work. But then, the walls went up…
…The businesses developed systems to quickly usher people along from undesirable tenants, drawing their attention to the carefully manicured plots where nary a blade of grass was out of place. And they started checking IDs at the door, making sure you were known both to the business owners and the policemen who had set up watchtowers and CCTV networks. Increasingly, drones passed overhead, operated by businesses who peered in to see what kinds of plants you were growing and what kinds of decorations you were putting up in hopes of selling you something similar later on.
If a tenant decided they were sick of their spot within a walled garden, well, they could leave — but it meant they abandoned what they had built, and the path for friends or admirers of their work to come visit them became a lot more arduous to traverse.
This is the world of today’s web.
In White’s analogy, the internet itself is a substrate: a largely empty expanse of land ready for anyone to set up a small acreage. But over time walled communities represented by companies like Meta have sprung up to capture our attention and our content. I have written before about the strong ties that bind generative artificial intelligence to social media: all of the most powerful LLMs stem from the data and wealth created through two decades of extraction on platforms like Facebook, Amazon, and YouTube.
So, if we follow these various trains of thought, technology companies have spent two decades turning our every online interaction into data, and that data – along with the vast data of the Commons – has contributed to the creation of generative artificial intelligence. Now, generative artificial intelligence threatens to further destroy the integrity of our online world.
But what does all of this have to do with communication? And how did I end up down this rabbit hole in the first place?
Overcome by Sadness
You can thank this post from educator Jason Gulya, where he reflects on a recent essay by philosopher Luciano Floridi on the concept of “distant writing”. Gulya describes his initial reaction to Floridi’s idea as being “overcome by the sadness of it”.
When I shared my thoughts about Jason’s post and Floridi’s “distant writing”, Miriam Reynoldson, fellow Australian PhD victim, suggested that it’s not just writing that’s impacted by this distancing, but all forms of communication. I tentatively agreed. I linked it to dead internet theory, insofar as technologies are already threatening online communications, but I also suggested that maybe this is something of a course correction: that the way we have come to communicate in the two decades since the birth of social media is deeply unnatural, and that the bots and AI-generated YouTube slop are distancing us from online communications in a way which might ultimately restore some balance.


Like Gulya, I’m still trying to untangle my own thoughts on whether AI writing will ultimately be good or bad for communication.
But a change of gears is in order. Let’s move from the death of the internet back to the more well-trodden path of the death of the author.
Distant Writing and the Death of the Author
Reading Luciano Floridi’s paper on distant writing was, if nothing else, a useful reminder of how tightly philosophy clings to the idea of text as “the written word”. Floridi defines “distant writing” (or wrAIting*) as a practice in which I, the human author, design the constraints and a large language model carries the prose to completion. I am a “meta-author” who details the narrative blueprint while the model does the bricklaying. It’s perhaps a decent enough description of what many of us already do when we hand prompts to ChatGPT and then curate the word salad that comes back.
My problem is less with Floridi’s argument than with its scope. His discussion of narrative possibility, modal logic, and authorship never strays beyond paragraphs and sentences. Even the rhetorical flourish – “What will happen when we think through wrAIting?” – assumes that thinking remains fundamentally textual. That omission is pretty telling: not everyone “thinks through writing”. Whilst I agree with the premise of John Warner’s book More Than Words that writing is one of many means for thinking and feeling, it certainly isn’t the only means. In fact, we have a rich cultural history that is entirely unrelated to the written word, and we are already seeing signs that communication online has moved back to the verbal and visual.
And then we return to the hottest new flavour of online communication: GenAI. Though Floridi’s distant writing is predicated on the use of large language models to produce text, much of the industry behind the tech has already shifted its sights towards video and audio. In 2025, all of the major models are natively multimodal: GPT-4o incorporates image recognition, image generation, speech-to-speech and, through the partner application Sora, video generation; Gemini creates code, images and video in a single context window, and the recent release of Veo3 adds dialogue and audio to the video generation. Writing is now only one of many modes in the scope of an LLM-based product.
I’ve been making this point for a while. Back in April 2024 I observed that the research conversation “is still squarely on text-based models, and in particular ChatGPT,” a focus that “neglects the opportunities and the challenges presented by multimodal generative artificial intelligence.” In June, I returned to Gunther Kress and Theo Van Leeuwen’s Multimodal Discourse to argue that the term “multimodal” has been flattened by marketing, and that we need a richer semiotic vocabulary to describe AI systems that “appear to understand language, interpret speech, produce images, ‘read’ images, and so on.”
Seen through that perspective, Floridi’s distant writing is less than half of the story. Yes, large models can “flatten the strata” of line-by-line composition and text production, but they also collapse image, audio, gesture and spatial design into the same conversational prompt. If authorship is drifting from execution to design, then design now spans colour palettes, voice-over timbre, camera movement, even the choreography of avatars.
But this is where we have to consciously define what we mean by design. Floridi positions the human as a meta-author, an intermediary who specifies constraints for the LLM to fulfil. But in multimodal theory, particularly in Kress and Van Leeuwen’s sense, Design is not “constraint-setting”; it’s a semiotic stage: the moment when meanings are orchestrated across modes to realise a communicative intent. It’s a meaning-making act, not a systems-engineering one. The meta-author is a technician. The multimodal designer is a rhetorician.
The philosophical implications go deeper. Referencing Barthes’ Death of the Author, Floridi rightly notes that authorship has long been a contested idea, and suggests that while Barthes’ assertions may have been exaggerated, the death of the author could now be followed by the “birth of the author as a designer”.

But Floridi stops short of, for example, the more radical decentring offered by Foucault’s “author-function”: a construct of institutional and discursive control rather than individual authorship. For Foucault, the author is also a mechanism through which we filter legitimacy, intention, and interpretive authority. The question is not only ‘Who wrote the text?’ but also who gets to be read as authorial.
In the context of multimodal AI, the author-function splinters even further. Who is the author of an AI-generated video, voiced by ElevenLabs, animated with Veo3 from a seed image generated in Midjourney, and edited by an 18-year-old influencer for a TikTok post? Are you the author if you write the words, but an algorithm produces the visuals? What about if an LLM generates the text, and you sing the lyrics? Authorship is distributed among humans, models, platforms, and algorithms.
This point has been well-articulated in another recent LinkedIn thread (honestly, if you’re avoiding LinkedIn because you think it’s just full of skeezy marketers then please reconsider and come hang out) initiated by Jonathan Boymal. Boymal draws on Hannah Arendt to reframe authorship as stewardship: a responsibility to tradition, care, and community as opposed to the Romantic ideal of the isolated genius. Others, including Ilkka Tuomi and Owen Matson, extended the conversation toward relational ethics, a critique of Cartesian interiority, and posthumanism. The author is not an isolated mind issuing prompts but a node in a network of semiotic, ethical, and social relations.
Floridi’s essay, then, is not wrong, but it is narrow. Maybe that’s OK: Floridi made no pretensions of going beyond the written mode, and you can’t shovel every single philosophy of writing into one article (that’s what LinkedIn posts are for). Distant Writing diagnoses the shift from handcrafted prose to prompt-driven generation, but it does leave the rest of the semiotic landscape hanging. Filling that void means asking how large language models and related technologies handle multimodality, image politics, the persuasiveness of audio and video, and the vast world of digital communications beyond the written word.
Maybe we are heading towards a kind of distant communication, rather than just distant writing. As Miriam Reynoldson suggested, it’s not hard to imagine that the ideas outlined in Floridi’s essay could be extended to the multimodal communications that fill our digital lives.
It’s already easy to picture a time when you open up your emails and find that the majority were written, in part or entirely, by AI. This could extend well beyond emails. Imagine every Zoom call filled with HeyGen-style avatars. Every YouTube video generated top to bottom—script, video, dialogue, and music soundtrack, all created by AI. Every time you pick up a phone (assuming, of course, that phones exist by this point and haven’t been totally replaced by sunglasses), having to question whether the entity at the other end of the line is a real person or some sort of technological soup; a human-scripted, LLM-generated, voice-cloned facsimile.
And that, dear reader, brings us full circle back around to the tinfoil hats.
* I have to admit some bias here against the term wrAIting. For reasons entirely beyond rational thought, I hate portmanteaus and the injection of acronyms into the middle of perfectly good words. See also: “edufluencer”, “edutainment”, “educAItor”… These things make me recoil in horror, snarling like a pit bull chewing a wasp.
Creating our own Context
When the Metal Gear Solid AI proclaimed that it alone must “create context”, it implied humans are helpless before the torrent of our own words, incapable of cultivating meaning without algorithmic overseers. After observing two decades of data-extraction, watching writers and educators grieve for the dwindling craft of hand-made prose, and probing the cracks in authorship opened by multimodal AI models, it might seem like we’re already too far along the path laid out by tech companies and digital platforms. But the Metal Gear Solid AI’s prescription – a central authority that filters culture like a eugenicist prunes a gene pool – lands exactly where dystopia always does, in the fantasy that control cures complexity.
The dead-internet pessimists are partly right. Bots do saturate timelines, platforms do gatekeep attention, and authentic dialogue is harder to recognise than ever. But the dead-internet prophets falter in mistaking structural damage for total collapse, in much the same way that Floridi’s argument is too narrow to account for the full richness of online communications. The garden is walled in, but not razed. Wherever someone exerts more control over their AI copilot than Floridi’s “meta-author”, or chooses a slower form of text – newsletter, long-form video, blog – over the algorithm-optimised scroll, the craft that Gulya described lives on.
That sense of care brings us back to stewardship, and Arendt’s call to tend a shared world, surfaced in Boymal’s post, offers a sturdier ethic than either Floridi’s “meta-author” or the Metal Gear Solid AI’s technocratic curator. Stewardship accepts the messy world of digital communication and insists on maintenance: it calls for mindful custody and the acknowledgement that we have never created our texts in isolation.
It was T. S. Eliot who warned the world might end “not with a bang but a whimper,” a quote co-opted by the villainous video game AI to describe the collapse of the online ecosystem as we drown in “truth”. In Floridi’s Distant Writing, some people see an equally bleak future where much of the joy of human communication is flattened to a thin line running directly from prompt to output. But that future is not inevitable. Whenever we decide to write, or speak, or produce film, or images, or code, with deliberation, we are designing. Not in the Floridian sense of “employing Large Language Models (LLMs) to generate narratives, while retaining creative control through precise prompting and iterative refinement”, but in Kress’s sense of Design as intentionality and the nuanced interplay of modes.
If that sounds abstract, let me use this article as one example. I designed the article as a response to Jason Gulya’s LinkedIn post. In forming my initial ideas, I listened to several podcast episodes, watched a few YouTube videos, and outlined (on paper) some early ideas. I went out for a walk and dictated a few draft sections. Then, I saw Jonathan Boymal’s post and the subsequent comments, and it stirred some new ideas which related back to the Foucauldian framing of my PhD, and the posthumanist ideas I discussed in 2023 with my supervisor.
I took the LinkedIn conversations, the draft audio recordings, and some of my notes and pasted them haphazardly into WordPress, including writing some custom HTML to embed the social media posts directly, and sourcing a few relevant images. I used a combination of ChatGPT o3 and Claude 4 to proofread and make structural edits, and then I returned to the post itself for final edits. I used AI, but my writing is far from “distant”, and it is more than just words. I’ll share this post on social media, within those walled gardens erected by companies like Microsoft and Meta, but it exists on my little plot of land at the domain I own and have owned for fifteen years.
I don’t think that all writing with AI will become “distant”. I don’t think that “writing” ever really meant just “words”.
And I think that the death of communication, the author, and the internet at the hands of AI, has been greatly exaggerated.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch: