Prompt Whispering: Getting better results from ChatGPT

This article was originally posted over on LinkedIn.

If you haven’t heard about ChatGPT in the past week, then your LinkedIn feed looks dramatically different to mine. Probably due to the nature of the people I follow, and the algorithms lurking under the hood, the latest experiment from OpenAI has almost entirely filled my feed, Chrome suggestions, and inbox. I don’t mind. I’m enjoying the exploration and testing, and the range of opinions about ChatGPT from “it’s a bullshit machine” to “this will kill Google search”.

Whatever the truth of ChatGPT’s capabilities, it’s definitely gone mainstream. Articles from The Age, The Guardian, The Times, and The Atlantic have popped up like mushrooms across the internet. I’ve obviously got a bias towards those about education, including Will ChatGPT Kill The College Essay? and AI bot ChatGPT stuns academics with essay-writing skills and usability. A focus of articles like these has been the threats to education – particularly tertiary – and the potential for cheating. This will certainly be of huge importance. The education system is a slow mover, and the chances of it catching up with the pace of change are slim to none.

I’ve turned this post – Practical Strategies for ChatGPT in Education – into a live webinar for February the 16th. Click here for details.

I’m excited, though, about the potential of these technologies to disrupt education. Back in June – or about 267 years ago in AI years – I wrote a post about the potential impact of AI in the secondary English classroom. In it, I outlined some possible scenarios from reactive (banning and blocking the technology, and issuing all essays under exam conditions) to optimistic. At the most optimistic end of the scale, I can see the essay losing its position on the pedestal as the primary form of knowledge assessment. Writing still has a place, but it sits alongside discourse and debate. AI writing becomes integrated into education and sits alongside other tools to support learning.

In order for this to happen, teachers and educators need to get to grips with the capabilities of the technology. After spending a week working with ChatGPT, and a few months with earlier models, I’ve learned a lot about getting the most out of the AI writer.

Writing better prompts

The best way to learn how to work with ChatGPT is to get in there and experiment. It can help, however, to use a few basic principles to guide your writing. I shared a post recently from Ben Whately where he gave three basic principles for writing quality prompts: be specific; chunk the work; and get it to improve on its own output. What I really liked about the post was the observation that prompting GPT is “a lot more like teaching than it is like conventional programming.”

Building on that idea, here are seven ways to “teach” ChatGPT to provide more interesting, accurate, and sophisticated responses.

Here’s the overview:

  1. Be precise
  2. Avoid the efficiency trap
  3. Check the facts
  4. Iterate and improve
  5. Role play
  6. Remind, remind, remind
  7. Fill in the gaps

1. Be precise

ChatGPT can’t think, and it can’t second-guess what you want it to produce. If you provide a generic prompt, you’ll get a generic response. This extends to any form of writing, including essays, fiction, and advertising copy. The more detail you can provide, the better the response you’ll get back. For example, if you provide a prompt like, “Write an essay about Pride and Prejudice,” you’ll almost certainly get a variation on a five-paragraph essay with very broad generalisations about the text, themes, and characters, like this:

[Image: ChatGPT output showing a bland Pride and Prejudice essay response]

But if you start to add detail to your prompt, you’ll end up with much more specific results. Consider the following prompt:

Write a 700 word essay about the following topic: ‘how does Jane Austen view the sanctity of marriage in Pride and Prejudice’? Academic tone. Use short, inline quotes from the novel as evidence and explain the quotes without saying “in this quote”. Sophisticated responses, graduate or postgraduate level. Focus on the author’s craft.

This includes word length, topic, notes on tone and style, and specific requests on how to incorporate evidence. It still won’t necessarily get all of that right, but you’ll be off to a much stronger start:

[Image: ChatGPT output showing a more detailed response, including quotes]
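
If you like to see the idea written out, here’s a minimal sketch of the same “be precise” prompt assembled programmatically from its parts. It assumes the OpenAI Python client and the gpt-3.5-turbo chat model, neither of which features in this post (everything above uses the ChatGPT web interface), so treat it as an illustration of the prompt structure rather than a recipe:

```python
# A sketch only: build a precise prompt from explicit parts (topic, length,
# tone, evidence requirements) instead of a one-line request. The OpenAI
# Python client and the "gpt-3.5-turbo" model are assumptions; the post
# itself uses the ChatGPT web interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def precise_prompt(topic: str, words: int, tone: str, requirements: list[str]) -> str:
    """Assemble a detailed prompt from explicit requirements."""
    bullet_points = "\n".join(f"- {item}" for item in requirements)
    return (
        f"Write a {words} word essay about the following topic: '{topic}'. "
        f"{tone} tone.\n"
        f"Requirements:\n{bullet_points}"
    )


prompt = precise_prompt(
    topic="How does Jane Austen view the sanctity of marriage in Pride and Prejudice?",
    words=700,
    tone="Academic",
    requirements=[
        "Use short, inline quotes from the novel as evidence.",
        "Explain the quotes without saying 'in this quote'.",
        "Sophisticated responses, graduate or postgraduate level.",
        "Focus on the author's craft.",
    ],
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```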

2. Avoid the “efficiency” trap

These models are trained on enormous sets of data and vast libraries of text. This can make them incredibly efficient at predicting language output. The problem with efficiency, however, is that it can be boring. If you ask ChatGPT to write a story, for example, it will race along the most direct path it can find from beginning to middle to end:

[Image: ChatGPT output showing a basic story about a cat]

To get better responses, you’ll need to follow Ben Whately’s advice and break the prompts into more discrete chunks. Slow down the prompting process and, as with point one, be as precise as possible:

[Image: ChatGPT output showing a more detailed opening to a story about a cat]
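
If you prefer the chunking spelled out, here’s a rough sketch of the same approach as a scripted conversation, where each chunk is sent as its own prompt and the model’s replies stay in the running history. Again, the OpenAI Python client, the model name, and the step wording are my own assumptions for illustration:

```python
# A sketch of chunked prompting: the story is requested one section at a
# time, and every reply is kept in the message history so the model builds
# on its own output. Client, model, and step wording are assumptions.
from openai import OpenAI

client = OpenAI()

steps = [
    "We are writing a short story about a cat, one section at a time. "
    "Write a slow-paced opening paragraph that establishes the cat, the "
    "setting, and the mood. Do not move the plot forward yet.",
    "Write the next paragraph, introducing a small complication. Keep the "
    "pace slow and the detail rich.",
    "Write a closing paragraph that resolves the complication in an "
    "understated way.",
]

messages = []
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    section = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": section})
    print(section)
    print("---")
```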

3. Check the facts

GPTs are notorious for fabricating information. Arvind Narayanan and Sayash Kapoor called ChatGPT a “bullshit generator”, and Meta’s Galactica was pulled down quickly after launch because of its potential to generate massive amounts of pseudoscience and false knowledge. The nonsense stems from the way GPTs construct their outputs. The GPT isn’t thinking rationally, nor is it capable of fact-checking its own output.

As mentioned above, it can’t reflect, and it is basically stitching language together by predicting the next most probable word. In Ben Thompson’s excellent article on the limitations of ChatGPT, he illustrated this by posing the GPT a question about Thomas Hobbes for his daughter’s history homework. Partway through the response, the AI conflates Hobbes with John Locke, probably because the two are frequently mentioned together in texts that form part of the vast OpenAI dataset.

The biggest problem is how convincing the errors are. Take this example:

[Image: ChatGPT output showing an essay with a plausible but incorrect reference list]

As legitimate as these references appear, the academic literature is a fabrication. There is a text called Jane Austen’s Philosophy of the Virtues, but it was written by Sarah Emsley and published in 2005. David Monaghan is a prolific Austen scholar, and ChatGPT has cobbled together these two facts into a fake reference.

When writing ChatGPT prompts, be specific in your requests for evidence, and always require the AI to provide a reference list so that you can manually check each source.
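
As a rough sketch of that habit, the snippet below asks for the reference list under a fixed heading and then pulls those lines out for manual checking. Nothing here verifies anything automatically – the checking is still up to you – and the Python client and model name are assumptions on my part, not something the ChatGPT interface exposes:

```python
# A sketch of the checking habit: ask for a reference list under a fixed
# heading, then pull those lines out so each one can be verified by hand.
# Nothing here checks the sources automatically. Client and model are
# assumptions.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write an essay about Jane Austen's views on marriage. Use real sources "
    "and provide citations in APA7, with a reference list at the end under "
    "the heading 'References'. Do not fabricate references."
)

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
essay = reply.choices[0].message.content

# Everything after the 'References' heading gets set aside for manual review.
_, _, reference_block = essay.partition("References")
print("Check each of these by hand:")
for line in reference_block.strip().splitlines():
    if line.strip():
        print(" *", line.strip())
```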

4. Iterate and improve

In the above example, I first provided ChatGPT with the following prompt: Write an essay about Jane Austen’s views on marriage. Use real sources and provide citations in APA7 with a reference list at the end. Do not fabricate references. All references must be public access and available online.

The essay it produced contained references to Jane Austen’s texts, but not to any other literature. That’s why, in the example for point three, I used the expression “rewrite the above essay”. Often, ChatGPT will act in unexpected or unwanted ways. Be prepared to iterate through multiple prompts to get the result you want. Unlike previous apps using the model, ChatGPT has a limited memory function, which makes it possible to make these changes. You can also paste the entire previous result as part of the prompt and ask it to adjust as required.
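
Sketched out as a script, the iterate-and-improve loop might look something like the following, with each “rewrite the above essay” instruction appended to the running conversation alongside the model’s previous draft. The client, model, and revision wording are assumed for the sake of the example:

```python
# A sketch of the iterate-and-improve loop: each revision instruction is
# appended to the conversation along with the model's previous draft, so
# "the above essay" always refers to the latest attempt. Client, model,
# and revision wording are assumptions.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": "Write an essay about Jane Austen's views on marriage. Use "
               "real sources and provide citations in APA7 with a reference "
               "list at the end.",
}]

revisions = [
    "Rewrite the above essay, adding references to academic literature about "
    "the novel, not just to Jane Austen's own texts.",
    "Rewrite the above essay with a tighter, three-sentence introduction.",
]

for revision in revisions:
    draft = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": revision})

final = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(final.choices[0].message.content)
```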

5. Role play

One of the first things that “the internet” did to ChatGPT was find ways to jailbreak it, causing it to act in unintended ways. This included tricking GPT into giving detailed instructions on how to hot-wire cars, or into producing other risky content. The most common of these approaches was very simple: ask it to play a role.

Asking ChatGPT to role play (or act, or pretend, or conduct a thought experiment) can also be a useful tool for crafting your prompts. If you’re looking for a specific style of outcome, or trying to target a particular audience, then asking it to role play adds an extra layer of interest to the output. For example, compare the following:

[Image: ChatGPT output showing a generic description of marriage]
[Image: ChatGPT output showing a description of marriage in the style of Jane Austen]

We’ve hardly conjured the spirit of Jane Austen here, but the result is a lot less “Wikipedia entry”, and might form the basis of a more interesting piece of writing. There are even a couple of parts which do align with the real-life Jane Austen’s views on marriage, including the ups and downs and the need for a deep commitment and mutual respect.
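
For completeness, here’s a small sketch of the role-play idea expressed through the API’s system message, which does a similar job to the “pretend you are…” phrasing typed into the chat window. The persona wording is my own, and the client and model are assumptions rather than anything from this post:

```python
# A sketch of role play via a system message, which does a similar job to
# typing "pretend you are..." into the chat window. The persona wording is
# mine; the client and model are assumptions.
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are Jane Austen. Answer in her voice, drawing on "
                       "her views of marriage, courtship, and society.",
        },
        {"role": "user", "content": "Describe marriage."},
    ],
)
print(reply.choices[0].message.content)
```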

6. Remind, remind, remind

ChatGPT has a “memory”, unlike many other generative language models. It’s this function which means you can “teach” the model to correct errors, fix issues with its tone and style, and hold something like a realistic conversation.

But the memory only extends so far, and the tendency of ChatGPT is to “forget” certain aspects of your prompt as it drifts back to its default style. This means that if you ask it to role play Socrates, for example, it will begin by assuming its approximation of the philosopher’s world view, but will eventually lose the thread.

The simplest remedy is to append a reminder to each prompt, such as “in your role as Socrates” or “because you are Socrates”. As the technology continues to advance, these memory issues will likely disappear.
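
A tiny sketch of that remedy: a helper that quietly appends the role reminder to every question so the persona survives a longer conversation. The helper and the reminder wording are mine, and the client and model are assumptions, not anything built into ChatGPT:

```python
# A sketch of the reminder habit: a helper appends the role reminder to
# every question so the persona survives a longer conversation. The helper
# and reminder wording are mine; the client and model are assumptions.
from openai import OpenAI

client = OpenAI()
REMINDER = "Remember: you are answering in your role as Socrates."
messages = []


def ask(question: str) -> str:
    """Send a question with the role reminder appended, keeping the history."""
    messages.append({"role": "user", "content": f"{question}\n\n{REMINDER}"})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer


print(ask("What is justice?"))
print(ask("Is it ever right to break the law?"))
```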

7. Fill in the gaps

Finally, you might run up against another of ChatGPT’s inherent flaws: it doesn’t actually know everything. Anyone who has played around with the tech will have encountered the bland “as a large language model trained by OpenAI, I only have access to…” style response. The dataset only extends to 2021, so asking questions about anything post ’21 – or anything which isn’t included elsewhere in the massive dataset – will yield unimpressive results.

[Image: ChatGPT output showing its inability to reference a film made after 2021]

Because of ChatGPT’s ability to hold information, however, you can feed it material from other sources. This is similar – although on a much smaller and more simplistic scale – to training and fine-tuning GPT-3 on your own data.

After feeding ChatGPT information on Marcel the Shell With Shoes On (I have no idea…) from IMDB, Wikipedia, and Rotten Tomatoes, here is the second response:

[Image: ChatGPT output showing a detailed review based on the input text]
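
Sketched as a script, “filling in the gaps” amounts to pasting the background material into the conversation before asking the question that depends on it. The placeholder text, the client, and the model below are assumptions – the point is simply the ordering:

```python
# A sketch of filling in the gaps: paste the background material into the
# conversation first, then ask the question that depends on it. The
# placeholder text, client, and model are assumptions.
from openai import OpenAI

client = OpenAI()

background = """(paste the plot summary, cast list, and review excerpts
copied from IMDB, Wikipedia, and Rotten Tomatoes here)"""

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Here is some background information on the film "
                       "Marcel the Shell With Shoes On:\n\n" + background,
        },
        {
            "role": "user",
            "content": "Using only the information above, write a short, "
                       "detailed review of the film.",
        },
    ],
)
print(reply.choices[0].message.content)
```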

Learning to teach ChatGPT

Even though it’s only been a week, I’ve learned a lot about working with OpenAI’s latest model. Mostly, I’ve stumbled across things by accident. I’ve also picked up some great tips just by reading a handful of the many articles filling my feed. As I said at the start of this post, the best way to learn how to teach ChatGPT is to get in there and start experimenting.

All of the prompt suggestions above overlap with effective teaching. Being precise with instructions, pushing students beyond the most “efficient” (or easy) answer, checking facts and providing contextual knowledge, and encouraging constant iteration and improvement are the hallmarks of good instruction. Sometimes, it’s even necessary to remind students to stay on task…

As an educator working in both secondary and tertiary education, I can see the potential threats posed by these technologies. But I can also see the great opportunities for working with these tools in new and exciting ways, and I think that we owe it to our students to dive in and explore.

If you’ve made it this far, well done. As a final demonstration, I asked ChatGPT to create a prompt for its cousin, DALL-E, to generate a self-portrait. The header image for this post is the result. Here’s the text:

[Image: ChatGPT output showing a prompt for a “self-portrait” to be generated by DALL-E]

Want to talk about advances in AI and the impact on education? Get in touch.

