Earlier this week I posted my first ever web game: Real or Fake? It’s a game which, as you can guess from the title, asks you to spot the fake from 10 pairs of images.
This article contains spoilers for the game, so if you haven’t played it yet go and do that first! Once you’ve had a turn, come back here and read on…
Welcome back.
If you’re like most of the several thousand people who played the game in the first few days, you probably scored between 4 and 6 out of 10. So far, only a handful of people have managed the coveted 10/10, and they admit that they spent far too much time looking for tell-tale AI glitches.
In reality, whether we’re doom scrolling social media feeds or reading actual news online, we only ever spend seconds glancing over images.
We are officially at the point where AI has left the uncanny valley and entered the desert of the real.
Where did all the fingers go?
If you know anything at all about AI images, then you know they struggle with fingers and hands. It is such a notorious issue that it has quickly become a meme, with people suggesting you should wear extra fingers when you rob a bank so you can claim the CCTV footage is AI generated when you get caught…
But it looks like AI’s most notorious problem has been more or less solved in newer image models, such as Black Forest Labs’ powerful Flux.
Flux has now been built into X’s (formerly Twitter’s) AI, Grok. This means that paying X users can generate incredibly photorealistic images and share them instantly across the platform.
And they are.

Real or Fake? Making the AI Image Game
So I decided to make a little experiment. How realistic could I make my AI images using easily available commercial platforms without any editing?
First I fired up two powerful image generation applications:
Midjourney v6.1: https://midjourney.com
Flux via Grok: https://x.com/i/grok
Midjourney just (literally, a few hours before writing this) announced that users can generate 25 free images on their platform, making it much more accessible than previous versions which could only be accessed via Discord.
If you don’t want to use X (and are therefore a reasonable human) you can also access the powerful Flux.1[dev] model on Hugging Face here, or the faster but slightly lower quality Schnell version here.
To create the fake images, I used the same prompt in both platforms and then chose the one with the most realistic output. I worded the prompts to introduce deliberate flaws such as overexposure and amateurish framing, for example:
“photo of a student night out, group of university students having fun, drunk, amateurish flash photography, 2004 canon-EOS digital camera, indoors, bar, over exposed flash”
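That prompt pattern is just a subject plus a list of deliberate “amateur photo” flaws, and it can be sketched as a tiny helper. This is purely an illustration of how I structured the prompts; the function name and the idea of keeping the flaws as a reusable list are my own, not a feature of either platform:

```python
def build_prompt(subject: str, flaws: list[str]) -> str:
    """Combine a photo subject with deliberate 'amateur shot' modifiers."""
    return ", ".join([f"photo of {subject}"] + flaws)

# Deliberate imperfections that push the model away from a glossy AI look
flaws = [
    "amateurish flash photography",
    "2004 canon-EOS digital camera",
    "indoors", "bar", "over exposed flash",
]
print(build_prompt("a student night out, group of university students having fun, drunk", flaws))
```

Paradoxically, asking for a *worse* photo is what makes the result more believable: crisp, well-lit, perfectly framed images are exactly what generators produce by default.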
I then went to Good Ol’ Fashioned Google Image Search and searched for a similar subject, like “student night out”, with the Creative Commons filter enabled.
The final two images are stacked up against one another in the finished game.

And that’s it. Nothing fancy. No complicated technologies. Just a couple of logins to two very easy-to-access platforms, and some basic prompts.
How to Spot a Fake, For Now…
There’s no foolproof way to spot a fake. Some platforms still struggle with hands and fingers (Midjourney does; DALL-E, Google Imagen and Firefly are hopeless). For everything else, you need a fairly keen eye for things like:
- Shadows: which often come from indeterminate light sources, are the wrong length, or are attached to incorrect objects
- Smoke: which defies the laws of physics, emerges from strange places, or moves in unusual directions from the source
- Symmetry: especially around ears, but also anywhere else that you might expect a little more consistency
- Reflections: which frequently do not mirror as they should, or sit at the wrong distance from the reflective surface
- Refractions: where light bends in unusual directions, causes strange lens flare effects, or other artifacts
Again, these are by no means foolproof. There are more complex methods of “image forensics” including watermark detection, metadata, and ways to digitally enhance and explore images, but most of those are out of reach for the average punter.
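One of those forensic checks, metadata, is easy to try yourself. Photos from real cameras usually carry EXIF tags (camera model, exposure time, sometimes GPS), while images saved straight out of a generation platform often have none at all. Here is a minimal sketch using Python’s Pillow library; note that an empty result proves nothing on its own, since social media platforms also strip metadata on upload:

```python
from io import BytesIO
from PIL import Image

def exif_tags(source):
    """Return an image's EXIF data as a plain dict (empty if it carries none)."""
    return dict(Image.open(source).getexif())

# Demo: a synthetic image written straight to JPEG, like many AI outputs,
# has no camera metadata whatsoever.
buf = BytesIO()
Image.new("RGB", (64, 64), "grey").save(buf, format="JPEG")
buf.seek(0)
print(exif_tags(buf))  # {}
```

Run the same function on a photo taken with your own phone and you should see a dictionary of numeric tag IDs; a completely bare file is at least a reason to look closer.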
What’s clear is that we have entered a new phase of content creation online where, even compared to six months ago, the realism of AI-generated images has leaped ahead of what many of us can detect by eye.
Convincing multimodal deepfakes – AI images, audio and video which create a likeness of a real person – are close behind.
Get in touch if you’d like to discuss professional learning and consulting services for anything related to generative AI or digital strategy.