These Aren’t the Droids We’re Looking For: Move Along ChatGPT


I just checked my chat history and, other than recording demonstrations, the last time I used ChatGPT for anything serious was three months ago. Since then, every interaction I’ve had with OpenAI’s infamous chatbot has been either a direct comparison with other applications or a throwaway prompt like “can you hum a II-V-I in G major?”

Screenshot from ChatGPT mobile app showing prompt "can you hum a melody for a 2-5-1 in G major?" and the response "Absolutely I can do that!..."
Don’t ask.

That’s because, despite still being the world’s most used large language model, ChatGPT has become a very unserious application.

There are many things at play here: market forces, overextension, and the at times laughable and at times terrifying pursuit of money.

OpenAI promised a lot, and in the years since ChatGPT’s release, what they offer has grown steadily worse. I’m not alone in abandoning OpenAI’s flagship product. On social media, you’ll find hundreds of people like me, users and researchers of language model-based applications, who have basically given up all hope of OpenAI redeeming themselves.

This article looks at some of the ways in which Sam Altman’s company has fallen from grace and fallen behind competitors like Google in the AI arms race.

What OpenAI Promised

Rereading the November 2022 blog post introducing ChatGPT, there’s almost a naivety to OpenAI’s language about its new model. In the original post, OpenAI speak of the model’s limitations, its tendency to generate plausible-sounding but incorrect text, and their planned methods to reinforce their “iterative deployment of increasingly safe and useful AI systems”. There are many examples and an almost apologetic tone when discussing the “excessively verbose” and occasionally “harmful and untruthful” GPT-3.5 model. But as 100 million users unexpectedly piled into the chatbot, any sense of OpenAI’s naivety and humility quickly evaporated.

In March 2023, GPT-4 represented a significant step forward, mostly as a result of increased training data and reinforcement learning from human feedback provided by those millions of early users. Sam Altman, CEO of OpenAI, enthused about GPT-4 whilst simultaneously downplaying its capabilities, and boldly claimed that future versions would be more intelligent than humans across every domain.

The rhetoric of superintelligence has vied for top position with the language of safety and transparency in all of the company’s media since 2023. OpenAI, founded as an open-source alternative to Google’s monopoly on artificial intelligence research, promised the world. But has the company delivered on any of those promises of superintelligence, safety, or transparency?

Of course not.

What OpenAI Has Delivered

If I reflect on the last few years, the transition from GPT-3.5 to GPT-4 in March 2023 felt like the most promising and exciting advance in OpenAI’s recent history. The improvement in mathematical reasoning, the addition of advanced coding capabilities, and the model’s increased multimodality made it feel like a significantly different platform.

GPT-4 felt less like a chatty chatbot and more like a serious application for getting things done. I remember, for the first time, being able to use it as part of my website design, to produce automation scripts in Python, and to edit writing in a way which wasn’t possible with GPT-3.5’s heavy-handed approach to language. In the first couple of months following its release, I remember testing it on more complex maths and being pleasantly surprised at its capabilities.

But I also remember that by about August of 2023, things started to feel different again: harsher rate limits, answers which felt less sophisticated than they had a few weeks earlier, and a general sense that GPT-4 was degrading over time. I wasn’t alone in these feelings, and many commentators suspected that the more powerful GPT-4 was simply costing OpenAI too much money. ChatGPT has never been profitable, and every advance, from GPT-3.5 to GPT-4 to the addition of reasoning models and products like Deep Research, has meant that OpenAI burns more cash. Many features of the platform between 2023 and 2025 have followed a similar trajectory.

Coding capabilities degraded swiftly, with many examples of incredibly lazy coding from the bot: commenting out important lines, truncating large chunks of code, refusing to write extended passages, and inserting caveats which basically suggest the user “do it yourself”.

#insert some code here I guess
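If you haven’t suffered this first-hand, here’s a mocked-up Python sketch of the pattern; the function and file types are invented for illustration, not a real ChatGPT transcript. The code runs, sets everything up confidently, and then quietly hands the actual work back to you.

import shutil
from pathlib import Path

def backup_photos(source_dir: str, dest_dir: str) -> None:
    """Copy every photo from source_dir into dest_dir."""
    source = Path(source_dir)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)  # create the destination folder
    # ... rest of the copying logic remains the same as before ...
    # (loop over source.glob("*.jpg") and shutil.copy2 each file yourself)

Syntactically valid, superficially helpful, practically useless: the imports are there, the docstring is there, and the part you actually asked for is a comment.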

Advanced Voice Mode blew people away with its realism, but despite its huge potential, it has become more of an irritating novelty than a useful part of the application. Firing up Voice Mode today is an exercise in inane, circular conversations, underpinned by a model which is clearly not as sophisticated as the one used for standard text inputs.

OpenAI’s image generation, eventually rolled into ChatGPT, has similarly failed to impress time and time again. Its models lag behind competitors like Midjourney and Google, and have been plagued by dramas such as the Studio Ghibli incident of March 2025 and the bizarre tendency of ChatGPT images to take on a tone disparagingly referred to by many online as the “ChatGPT piss-stain”.

The fox is charming, sure, but why can’t ChatGPT render “white”?

On the theme of multimodality, OpenAI also promised the world with its Sora video generator, which, while impressive, has since been superseded once again by competitors like Google. In response, the company released its Sora 2-based social media AI video generation app into the US market, where it promptly dissolved into a mire of racist, misogynistic, and otherwise discriminatory videos. The company could probably have foreseen this, given its apparent focus on safety and its millions of data points on how users online interact with artificial intelligence (that is to say, mostly like idiots).

https://www.bbc.com/news/articles/c5y0g79xevxo

And those broken promises about safety are probably the biggest disappointment of OpenAI: from Sora’s racist videos of Martin Luther King Jr, to CEO Sam Altman’s announcement that ChatGPT would allow users to create explicit content, to the multiple reports of the model encouraging users towards self-harm and suicide, it seems as though safety has taken a back seat while profits take the wheel.

And yet the company marches on, proudly advertising free accounts to students and now teachers across the US, a brand-new “certification” course, and increasing attempts at market capture and political lobbying, including an Australian partnership with Sydney-based NEXTDC to build so-called “Australian sovereign AI”.

These tensions have ripped OpenAI’s chatbot apart: the company clamours for infinite scale whilst bleeding money, desperately seeking ever-higher valuations from venture capitalists, investments from governments, or the “trillions of dollars” apparently needed to reach artificial general intelligence. The profit motives of OpenAI, pushed along by Sam Altman, who, we must remember, has his roots in venture capital, cannot be reconciled with the founding mission statement of the company.

The idea of free ChatGPT access in education cannot be reconciled with the idea of sex bots and explicit conversations.

The obvious manoeuvring towards ChatGPT as a shopping assistant, which will surely be accompanied in the near future by hyper-personalised advertising, cannot be reconciled with claims to superintelligence.

The closed loop of money changing hands between OpenAI, Microsoft, Nvidia, and a handful of other companies will surely lead to the bursting of the AI bubble before it leads to the free, prosperous, democratic economy espoused by the so-called effective altruists of the Valley.

We were promised safe, transparent AI that would advance humanity. We got a shopping chatbot that you can have sex with.

Like I said, I’m not alone. This post was prompted in part by an earlier post I made on LinkedIn, and in part by posts from others like Mike Kentz and Maria Sukhareva, who have commented on the flagship chatbot’s hopeless coding skills and OpenAI’s generally tedious user interface changes.

I’m not the only one hanging out on LinkedIn yelling at clouds…

Where to Next?

I don’t think it’s possible to justify the use of OpenAI products in or out of education. In my presentations, workshops, professional learning, and online courses, I’ll generally demonstrate the best available platforms for the task at hand. That used to be ChatGPT, even with the caveat of the company’s many ethical problems.

That’s no longer the case. Gemini 3 Pro, Claude Opus 4.5, and Chinese competitors such as Kimi-K2-Thinking outperform ChatGPT in most tasks, from writing to maths to coding, while models like Nano Banana and Veo 3.1 lead the way for image and video generation. Not only is ChatGPT increasingly problematic given OpenAI’s ethics, it’s also increasingly lagging behind.

This is not me in a Melbourne cafe eating breakfast, but you’d be forgiven for thinking so. Image generators like Nano Banana Pro (Google) have outstripped OpenAI’s efforts by a long distance.

In sessions with schools and universities, people ask “which AI should I use?”, and ChatGPT is now generally near the bottom of the list of recommendations. If you want to build a simple website, use Google’s AI Studio, Gemini 3 Pro, or Claude models. If you want to edit writing, use Claude. If you want to do some simple AI-powered internet searching, use Perplexity or Gemini.

At the end of the day, use what you want. You won’t get any judgement from me, but as far as I’m concerned I can’t see a good reason for sticking with ChatGPT.

Subscribe to the mailing list

As internet search gets consumed by AI, it’s more important than ever for audiences to directly subscribe to authors. Mailing list subscribers get a weekly digest of the articles and resources on this blog, plus early access and discounts to online courses and materials.

No data sharing, no algorithms, no spam. Unsubscribe any time.

