The ChatGPT diaries: How I actually use ChatGPT

Since its release in November 2022, I’ve used ChatGPT a lot. I’ve also seen plenty of advice on how to use it – some good, some bad, some totally useless – alongside a gradual increase in “educational chatbots”, which are basically flashy websites built on top of OpenAI’s GPT models.

Other than image generation, I’ve never really found much cause to use apps beyond ChatGPT. I’ve obviously played around with internet-connected models like Bing, Bard, and Perplexity, but I still rely on good ol’ fashioned Google search for most of those purposes.

Now that image generation is also part of ChatGPT via the DALL-E 3 model, and given Adobe Firefly has already pretty much replaced Midjourney for me, I find myself using fewer and fewer apps.

The live webinar Practical Strategies for Image Generation in Education is now available as a recording. Click here to purchase the recorded webinar.

In this post, I’m going to show exactly how I used ChatGPT over a one-week period. Rather than setting out to record my use, which might have influenced how I used it, I downloaded all of my data and went back through a period about two weeks prior to writing this post. You’ll see exactly how I did that at the end of the week.


Sunday was obviously a day for messing around with some new features: ChatGPT voice in the iOS app, and image recognition. I started off the day with a fairly inane conversation. Although the voice interaction was pretty clear, it did mis-transcribe a few words (e.g., “her coffee” instead of “a coffee”).

Then I saw a post floating around suggesting that ChatGPT’s image recognition could be fooled, and thought I’d try it myself. It didn’t work.

Fool me once, shame on you…


On Monday, I just used ChatGPT to generate a few images for upcoming blogs and social media posts. In the first picture, I copied a paragraph from a draft blog post and used it to generate a few ideas.

In the second, I created the image for this LinkedIn post where I collated some of the cutting-room floor pieces of writing from my PhD.


On Tuesday I had some free time and got a little more use out of ChatGPT. First, a voice conversation (hence the long, rambling transcript) to start developing some AI policy tools for schools and universities. Then, I took some work I’d already done on GAI and “cheating” and used Advanced Data Analysis to compile the work and turn it into a Word document (which I then posted on LinkedIn here).

I also used image recognition to first “read” and then create some variations on the main images from my Practical Strategies for ChatGPT in education post, since I’m updating it and starting to put together an online course.


On Wednesday, I used ChatGPT plus Bing search to back up a few arguments. This is about as close to using it as a research tool as I get, and this was for some social media posts. I still find that Google search and the university EBSCO library are more trustworthy research tools.

Back to the online course I’m working on, and I decided it was time to bite the bullet and make some videos which include my actual face. I live in a small cottage on a farm with three kids (and a cat, pictured sitting in my chair), and my office is my porch – not an ideal film studio. I used ChatGPT’s image recognition to see if it had any suggestions.

It made a series of reasonable suggestions, and I ended up clearing off the antique writing desk and using the greenery out of the window as the backdrop. Then I tidied the porch a bit.

Still on Wednesday, I created a series of images for some LinkedIn posts about why I think chatbots are an educational dead-end. I found it very capable of producing cohesive images in a theme. I also created some header images for another blog post, and revisited some images from an earlier post.

Finally, because it had obviously been a busy day, I used image recognition to make some dinner suggestions. I ended up making papoutsakia with the eggplant, if you’re interested.


During the day on Thursday I was tied up doing some actual study, but in the evening I had a play around to see if I could use image generation and Advanced Data Analysis together to create a simple browser-based point-and-click game.

Short answer: no.

Longer answer: I generated the pixel art just fine and cleaned the images up a little in Photoshop. Then I imported them into Advanced Data Analysis, and it actually gave me a half-working game: the cottage exterior is the background image, a key is placed randomly, and clicking the key takes you inside.

That’s the theory, anyway. As you can see from the screenshots, it took a lot of cajoling. I tried various tricks I’ve seen elsewhere to make prompts more successful. First, I tried “take a deep breath”, which, believe it or not, actually seems to work.

That got me halfway there, and then I switched tack to a roleplay prompt and told the model, “imagine you are a professional games designer named Jules. WWJD?”. That actually did work, and the items in the game became clickable. Then I got tired and gave up.

Before going to bed I cranked out a couple more images for a recent blog post on why I think GAI generated content online might be like digital plastic.


Again, on Friday I found myself doing actual work and study, and didn’t use ChatGPT much at all. I did, however, churn out a really quick blog post about using ChatGPT to create a synthetic, human-free podcast.


End of the week, and time to pull this post together. I exported my data from ChatGPT via the settings and received an email with a zip file containing all of my chat history. I then imported that zip file into Advanced Data Analysis and asked it to extract the data from an earlier week – just the chat titles and date/timestamps, so I could quickly find them in the chat history.

As with every other time I’ve used Advanced Data Analysis, it sort of worked. The data it gave back was full of oddities, and it really struggled to understand the formatting – which is kind of ironic, given where the data came from. Eventually, I got a table with the information I needed.
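For the curious, the extraction step is simple enough to sketch by hand. Here’s a minimal example, assuming the export zip’s conversations file stores each chat with `title` and `create_time` (a Unix timestamp) fields – the exact field names and structure may differ between export versions, and the sample data below is made up for illustration:

```python
from datetime import datetime, timezone

def chats_in_range(conversations, start, end):
    """Return (ISO timestamp, title) pairs for chats created in [start, end)."""
    rows = []
    for conv in conversations:
        created = datetime.fromtimestamp(conv["create_time"], tz=timezone.utc)
        if start <= created < end:
            rows.append((created.isoformat(), conv["title"]))
    return sorted(rows)  # chronological order

# Made-up sample shaped like the export; a real run would load the
# conversations JSON from the exported zip instead.
sample = [
    {"title": "Voice chat test", "create_time": 1696760000},   # 8 Oct 2023
    {"title": "Pixel art game", "create_time": 1697100000},    # 12 Oct 2023
    {"title": "Old conversation", "create_time": 1690000000},  # July 2023
]
start = datetime(2023, 10, 8, tzinfo=timezone.utc)
end = datetime(2023, 10, 15, tzinfo=timezone.utc)
for ts, title in chats_in_range(sample, start, end):
    print(ts, title)
```

Nothing clever, in other words – which is partly why it’s frustrating when Advanced Data Analysis stumbles over it.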

The week in review

So, what did I actually use ChatGPT for? Well, I didn’t use it for research, or to create a million PowerPoints, or to become a millionaire. I guess that means I’m using it wrong. Here’s what I did use it for:

  • Testing the new voice, image recognition, and image generation features
  • Generating a lot of images for blog and social media posts
  • Image recognition to redesign my office
  • Image recognition to choose a recipe
  • A little fact-checking for a series of articles
  • A failed attempt at making a retro game
  • A successful attempt at analysing my chat history to write this post

One thing I hope you take away from this post is that there’s no magic here. There’s no secret “prompt engineering” skills or hacks. There’s just me, using ChatGPT when I feel like it and interacting in whatever way is needed to get the job done.

And that’s what I suggest to anyone looking to learn how generative AI works. Find a few jobs. Break them down into tasks. Try to express the task in clear, natural language. There’s your prompt.

If you’ve enjoyed this post, please share with the buttons at the end of the page. If you’d like to get in touch to discuss GAI, please use the form below:
