The Myth of the AI First Draft

I write daily, often for hours at a time. I’m an author of fiction and nonfiction, and even when I’m not writing something for publication I fill notebooks with ideas and frequently illegible scribbles. I’ve also taught writing for almost two decades, and I understand that not everyone loves writing as much as I do.

Yet, even as a lover of writing and a writing teacher, I find that the value education places on the ability to write often does more harm than good. This position sometimes gets me into trouble as an English teacher. Of course, we want one of the core skills of our subject to be highly valued, but I will die on this hill: people do not need to love writing or to be highly skilled writers in order to express themselves, be creative, or demonstrate their knowledge.

Unfortunately, holding writing up as the apotheosis of academic performance has led us into a pretty sticky situation with generative artificial intelligence.

As of November 2022, any person with an internet connection has access to a sophisticated large language model that can generate cohesive, believable, human-like text. Whether or not you think the output of ChatGPT and co. is actually any good is not the point.

The point is that a tool now exists, which means anyone can write. Because of the high value placed on writing in both education and society, these technologies have been perceived as a threat. The way these technologies are being used also reflects the paradigms of that broken system. Rather than finding ways to use these tools creatively, many have simply ended up doing more of the same, more quickly, and with less human input.

One of the biggest problems I see with generative AI and writing is something I’m calling “the myth of the AI first draft”. The idea comes from a good place: in response to concerns that AI will just be used to cheat and write entire academic essays, news articles, or other publications, people started to think up ways artificial intelligence could be used more appropriately.

An incredibly common use has been to promote artificial intelligence as a way to write the first draft. This suggested use is so pervasive that it’s made its way into major generative AI products, including as an accessibility feature in Microsoft Copilot, and as a core feature in Copilot for Microsoft 365.

Microsoft’s video presents GenAI Copilot as an assistive technology that can help write “first drafts” for blind and dyslexic users, but we should be cautious about how we use the technology for first drafts.
In Copilot for Microsoft Word, opening a new document automatically unleashes the draft-assisting chatbot.

This idea of the AI first draft is attractive for writers who face the fear of the blank page; younger writers and students, in particular, often say they “don’t know where to begin”. Putting pen to paper – or fingers to keyboard – can seem daunting, and using AI to create a first draft appears to offer a way to remove that pressure. But, seductive as it is, I think using GenAI in this way is not leaning into the potential of either the technology or of the writer.

Again, I want to stress that I know not everybody loves writing as much as I do, and not every person is capable of writing. But irrespective of whether people can write, people are capable of thinking for themselves, capable of creativity and humour, and of having something valuable to say. That’s why I think there are a few reasons to be cautious of the AI first draft.

Image created in Adobe Firefly 2: Details in metadata

The computational unconscious

Let’s start with the heavy stuff.

Jonathan Beller gave us the term “computational unconscious” to explain how digital computation and capitalism have merged, creating a background process that shapes our thoughts and actions without us realising it. Within the context of Large Language Models, this concept highlights how algorithms and data subtly influence the creation and understanding of texts, suggesting that our interactions with these models might unconsciously reflect and reinforce existing biases and ideologies. The dataset reflects our biases, and our use of LLMs powered by the dataset amplifies them.

Drawing on the computational unconscious as a basis for a first draft implies that the initial creative output might carry underlying biases and structures from the data it was trained on, potentially shaping the direction and content of the final piece in unnoticed ways. The bias in LLMs is increasingly subtle, due in large part to the human labour and feedback that goes into guardrailing and sanitising models like GPT.

Bradley Robinson, in his Speculative Propositions, discusses how the “computational unconscious,” shaped by global digital infrastructures and their inherent biases, interacts with and potentially modulates human subjectivity, especially in writing. He argues that automated writing technologies (which he labels AWTs) like GPT not only offer efficiency but also risk suppressing the individual’s capacity for resistance by assimilating their language and thought processes.

This integration into the computational unconscious carries the potential to transform writing practices and subjectivity, embedding platform capitalism’s oppressive structures into the very act of composing text and challenging traditional notions of literacy and individual expression.

There is, as yet, little research into exactly how much impact the dataset/algorithm combination of an LLM has on the writer. Datasets and development methods vary from developer to developer, and no two LLMs produce the same output, so it would be hard to compare the effect from model to model. But it is widely known that the composition of the dataset, particularly large scrapes of data like Common Crawl, tends towards bias and toxicity. All datasets are biased, and all GenAI models – even those which have been “made safe” – reflect the narrow and homogenous worldview of the computational unconscious.

Image created in Adobe Firefly 2: Details in metadata

The purpose of writing

Aside from those rather complex issues, having AI write a first draft which is then edited by the human writer is essentially the same as outsourcing the first draft entirely to a different person. Outside of activities for teaching students how to edit, I’ve yet to meet a writing instructor who would set a task where a student is encouraged to have a friend or family member – or, in the case of ChatGPT, a middle-aged white American man they’ve never met – write their entire draft before they tidy it up and hand it in.

Unfortunately, when the purpose of writing is to tick an academic checkbox, it makes perfect sense for students who are unwilling or unable to write to turn to a shortcut. After all, this gets them to that endpoint by the quickest route possible. That isn’t, or shouldn’t be, the purpose of writing. The purpose of writing isn’t just to demonstrate knowledge in the most expedient way. Writing is to explore knowledge, to connect and synthesise ideas, to create new knowledge, and to share.

When students use AI to generate a first draft, they skip 90% of that work, creating something that may well be worth sharing, but which has not in any way helped them form and consolidate their own understanding. The system surrounding writing in education encourages us to take shortcuts. We need to resist that temptation and change the system to value the real purpose of writing.

In fact – and I’m going out on a limb again here as an English teacher – if the purpose of an assessment is to check a student’s knowledge of a subject then 9 times out of 10 I don’t think writing is the best way to achieve that.

We can do better.

Irrespective of whether an individual loves writing, feels they have the skills to write, or absolutely hates and rejects the idea of writing, people do a better job than LLMs at creating ideas worth sharing. There are a few camps forming around AI and creativity, sometimes using research into creativity indexes or divergent thinking tests, and pitting AI models against humans.

Sometimes the AI models outperform humans on elements of these tests, and at other times humans far outperform the models. It is highly dependent on the sophistication of the model: comparing a human to GPT-3.5, for example, is entirely different from comparing them to a more sophisticated model like GPT-4. As models continue to improve, AI will no doubt perform well on these tests.

That doesn’t mean individual writers still can’t do better, because this really depends on how we define “better”. Again, this relates to my previous point: we define what success looks like in writing. If we want to generate 100 creative ideas in under 20 seconds, select the best 10, and produce persuasive marketing copy for a key demographic at an advertising agency, then yes, an LLM will probably do better than humans.

But if the point is for a writer to engage with an idea they’ve been grappling with, put their unique spin on it, add some of their personal perspective, and ultimately understand the idea better themselves, then it is still better for the writer to create that first draft themselves.

Image created in Adobe Firefly 2: Details in metadata

The SFD and the power of the blank page

It doesn’t matter what you write in your first draft – so much so that author Anne Lamott’s “shitty first draft” (SFD) has become a catchphrase among writers. Lamott dispels the myth that good writing flows effortlessly from the outset, sharing personal anecdotes and experiences to show that even successful authors struggle with their initial drafts. She advocates permitting bad writing as a means to overcome the paralysis of perfectionism, and suggests that through the mess and mistakes of first drafts, writers can find their path to clearer, more polished subsequent drafts.

Using GenAI does indeed create a shitty first draft. If you’ve ever used ChatGPT, or Microsoft’s new Copilot in Word, you’ll have seen that for yourself. But it’s not the kind of shitty first draft that actually achieves anything. It’s not a shitty first draft that gets the tangled, jumbled mess of thoughts out of your head. It’s not the type of SFD that clarifies gaps in your own knowledge. An AI-generated SFD is all filler and no killer: a pointless exercise that might fill a blank page, but doesn’t achieve much more.


Alternatives to the AI first draft

Instead of just complaining about GenAI like some sort of Chomskian rant, I think it’s important to acknowledge that these tools will make their way into classrooms and onto writers’ computers in a hundred different ways: we have to prepare students and novice writers to deal with them. While I don’t think the AI first draft is the answer, I do have a few other suggestions for how the technology might be used.

I’ve recently started a series of posts which include activities based around a writing cycle, incorporating GenAI into the usual processes involved in composition. The third article in that series focused on the use of generative AI tools to generate ideas and to support students and writers with brainstorming. Examples in that post include activities that use AI to create highly contextualised brainstorms, image generation based on free writing, and activities encouraging students to dig deeper into their own ideas.

As an alternative to using AI to generate ideas, we should be encouraging students to talk and discuss their own unique ideas. Those conversations can be recorded through voice memos: the transcription of a discussion or a student’s monologue could be the first draft, and an AI tool could be used to refine and edit the transcript.

In fact, that’s exactly how this entire blog post was written: I dictated it one Saturday morning, on a long, slow run, into AirPods connected to an Apple Watch, then transcribed the voice memo using Otter and tidied it up using a combination of ChatGPT and Mistral. Here are a few images from that process (except for the running part, because nobody needs to see me labouring my way down country roads talking to myself).

Conclusion

Using generative AI to create a first draft seems like an attractive way to beat the fear of the blank page, and a solution for students or writers who feel they’re lacking the skills to put pen to paper.

But it’s an incredibly heavy-handed use of the technology. It doesn’t make the best use of the tool or of the mind. It’s a shortcut, and shortcuts exist because of a system which doesn’t value the true purpose of writing. Using Generative AI will almost inevitably become part and parcel of writing in digital contexts, but the shitty first draft still belongs to us.

If you’d like to get in touch to discuss Generative AI, consulting services, or PD, please use the contact form below:


9 responses to “The Myth of the AI First Draft”

  1. A great article aligning with a lot of what I have been warning my fellow English teachers about – we need to actively discourage, or make it really hard for, students to outsource their first drafts to AI.

  2. Perhaps the emphasis should shift away from the writer’s perspective and more towards the reader’s experience. A writer’s passion for their craft and the writing process does not inherently guarantee a worthwhile output for the reader. Ultimately, let the readers themselves determine whether they find value in the knowledge generated by AI systems.

    Writing is a skill that requires cultivation, not an innate human trait. While not everyone possesses a natural aptitude for writing, a kind paraphrase of “not every person is capable of writing,” perhaps it is time we hear more ideas from people who live with dyslexia or simply do not possess the luxury of time that can be dedicated to the writing process. Their voices can be amplified through alternative mediums, including the recent Generative AI. By the way, in 2022, the OpenAI technology was already two years old, and other text generation systems were available for years before that for a fee.

    I agree that “writing is to explore knowledge, to connect and synthesise ideas, to create new knowledge, and to share.” However, it is not the only way to do so. How did Socrates and many other non-writers create new knowledge? There are countless other ways.

  3. […] his post, Furze outlines a few reasons to “be cautious of the AI first draft.” Furze’s […]

  4. […] this instance, I decided to write an article. I have written before about my preference to use voice memos and speech-to-text to draft articles, and that’s how I […]

  5. […] The Myth of the AI First Draft […]

  6. […] The Myth of the AI First Draft Digital plastic: Generative AI and the digital ecosystem Artificial Intelligence and Teacher Workload: Can AI Actually Save Educators Time? […]

  7. […] The Myth of the AI First Draft by Leon Furze […]

  8. Just Passing Through

    Great piece, but it is disappointing and bitterly ironic that the author chose to “illustrate” it with GenAI.

    1. Thanks for taking the time to comment. You’ll see that my own attitudes towards image generation have changed over time. In these (comparatively) earlier posts I regularly used Adobe Firefly to generate images. For most of the latter half of 2024 and into 2025, I have stopped using image generation for those purposes. I explained this in a post elsewhere (I forget exactly where…) but I now tend to use betterimagesofai.org, stock images, or my own images instead.

      That’s largely a personal choice as my own opinions and attitudes have changed over time but I feel it would be disingenuous to retrospectively change the older posts.
