Advertising in ChatGPT: Should We Care?


Let me start by saying that the last time I used ChatGPT for “serious work” was… [checks history]… about four months ago. Everything since then has been a comparison with other models, or a demonstration for teaching purposes.

Claude and Gemini have entirely replaced OpenAI’s chatbot for my day-to-day work, and given OpenAI’s increasing encroachment into education and politics, I’m happy to keep it that way.

It also means that I didn’t care – nor was I particularly surprised – when OpenAI finally announced that they’re going to start running ads in ChatGPT.

But you should be paying attention, even if, like me, you’ve given up on Altman’s LLM.

Example of what the first ads might look like. Image source: OpenAI

Why is OpenAI starting ads?

You can dress this up any way you like. OpenAI, in their blog post announcing the new ads, claim that it’s some sort of altruistic endeavour to make glorious AI accessible to all. But, as a colleague of mine used to say, “it doesn’t matter how many times you weigh the pig: it’s still a pig.”

And this particular pig smells like money. Money that, reportedly, OpenAI doesn’t have and desperately needs.

Most people who’ve had any interest in AI over the past few years have been waiting for this moment. It’s the freemium model of enshittification that every tech company since social media has followed as doctrine. Offering ChatGPT for free to all users was quickly followed by “Plus” accounts at US$20/month, and then “Pro” subscriptions at US$150/month.

But even these subscription models can’t make up for the enormous pile of cash that OpenAI is reportedly burning through. The company spends more than it earns, and is reliant on venture capital funding to survive. To continue to outpace their cash burn, OpenAI needs to convince investors that they have another way to generate revenue.

Enter, advertising.

I don’t often use AI to make images for the blog any more, but when I do they’re exclusively weird images of chumbuckets. Image source: Google Nano Banana Pro. “Chumbucket” is a word I learned in 2025.

How would advertising change ChatGPT?

OpenAI has been careful to frame their advertising rollout as benign. The company announced five principles guiding their approach: mission alignment, answer independence, conversation privacy, choice and control, and long-term value. They promise that ads won’t influence responses, that they’ll never sell user data to advertisers, and that they won’t optimise for time spent in ChatGPT.

These are reassuring words. But we’ve heard them before.

The initial ad format will appear at the bottom of responses when there’s a “relevant sponsored product or service based on your current conversation.” Ads will be clearly labelled and separated from organic answers. Users under 18 won’t see ads, and advertisements won’t appear near “sensitive or regulated topics like health, mental health or politics.”

But here’s where things get murky. According to The Information, internal OpenAI discussions have included giving sponsored chatbot results “preferential treatment” over non-sponsored results. One example floated: a user asking about headache relief could receive a promoted ad for Advil in the response, potentially burying the actual dosage information under sponsored content.

Fidji Simo, OpenAI’s CEO of Applications, has explicitly stated that “ads will not influence the answers ChatGPT gives you.” But we’ve all watched Google Search deteriorate from a tool that delivered relevant results to one where you scroll past sponsored content just to find what you’re looking for. The gravitational pull of advertising revenue is difficult to resist when you’re burning billions of dollars every quarter.

Sponsored ads top all of the search results in Google, whether in the form of visual adverts or text links.

What has OpenAI learned from social media and search?

The short answer: everything. And that should worry you.

Social media and search advertising are built on a simple premise: collect data about users, build detailed profiles, and use those profiles to serve targeted advertisements. Meta generates around 98% of its revenue from advertising. Google’s advertising business brought in $264.59 billion in 2023 alone. These aren’t companies that sell products… they’re companies that sell you to advertisers.

The mechanics are well-established. Facebook and Google collect vast amounts of user data: demographics, interests, behaviours, location, browsing history, and even things you merely pause on while scrolling. Then they use sophisticated algorithms to infer additional information about you. These algorithms can “automatically infer demographics like age, gender, and race as a side-effect of looking for patterns in the data, even if there was no intention to infer them.”

This is the model OpenAI appears to be adopting. Their announcement explicitly mentions showing ads based on “your current conversation”, which is precisely how targeted advertising works on other platforms. The difference is that ChatGPT starts with a significant advantage over its predecessors.

Another example of a possible interface. Image source: OpenAI

Is OpenAI’s data even more valuable than social media?

Social media companies spent decades developing increasingly sophisticated methods to infer user data. Facebook tracks which posts you linger on. Google monitors your search history. Both platforms use complex machine learning systems to predict your demographics, interests, and purchasing intent from fragmented behavioural signals.

ChatGPT doesn’t need to infer anything. Users simply tell it.

Think about what you’ve shared with ChatGPT. Perhaps you’ve asked for advice about a medical condition. Discussed relationship problems. Sought help with work challenges that reveal your job, salary expectations, or career frustrations. Requested recipes based on dietary restrictions. Asked for gift recommendations based on your partner’s interests.

A Stanford study on chatbot privacy concerns highlights exactly this risk: “If you share sensitive information in a dialogue with ChatGPT, Gemini, or other frontier models, it may be collected and used for training, even if it’s in a separate file that you uploaded during the conversation.”

The study’s lead author, Jennifer King, offers a concrete example: imagine asking an LLM for dinner ideas and specifying that you want low-sugar or heart-friendly recipes. The chatbot can draw inferences from that input, and the algorithm may classify you as a health-vulnerable individual. “This determination drips its way through the developer’s ecosystem. You start seeing ads for medications, and it’s easy to see how this information could end up in the hands of an insurance company.”

Research in mid-2025 found that nearly three-quarters of teens have used AI companions, with a third choosing AI bots over humans for serious conversations and a quarter sharing personal information with them. Users feel a false sense of confidentiality with chatbots: the conversational interface creates intimacy that traditional platforms never achieved.

Where Facebook had to painstakingly build profiles from thousands of data points and behavioural signals, ChatGPT users are handing over the complete picture in a single conversation. Medical histories. Financial situations. Relationship dynamics. Career aspirations. Personal beliefs. Mental health struggles.

OpenAI’s promise to “never sell your data to advertisers” may be technically true while remaining deeply misleading. They don’t need to sell the data: they can use it to target ads with unprecedented precision while keeping the raw data in-house. The advertising model doesn’t require selling personal information; it requires using personal information. That’s the entire business model.

What this means for education

So what happens when advertising meets education?

OpenAI has been aggressively pursuing the education market. In November 2025, they launched ChatGPT for Teachers, offering free access to verified US K-12 educators through June 2027. They’ve partnered with 16 school districts representing nearly 150,000 teachers and staff, including major systems like Houston Independent School District, Dallas ISD, and Fairfax County Public Schools. They’ve also signed deals with the American Federation of Teachers, pledging $10 million over five years for AI resources in education.

At the university level, Bloomberg reports that OpenAI has sold more than 700,000 ChatGPT licences to about 35 public universities. The California State University system alone is paying $15 million annually to provide ChatGPT to 500,000 students and faculty. Here in Australia, the company has partnered with La Trobe University and UNSW, among others.

For now, the protection seems clear: ChatGPT for Teachers is “ad-free” with “education-grade privacy protections.” Users under 18 won’t see ads. ChatGPT Plus, Pro, Business, and Enterprise tiers remain advertisement-free.

But here’s the problem: most students won’t be using paid versions.

The free tier – the version most students will access when they’re doing homework at home, studying for exams, or working on assignments outside school hours – will feature advertising. The “Go” tier, at US$8/month, will also include ads. Only the Plus tier at US$20/month and above remains ad-free.

What happens when a student asks ChatGPT for help understanding photosynthesis and receives a sponsored response from an educational technology company? When they’re researching career options and get advertisements for particular universities? When they’re seeking help with mental health struggles – remember, ads won’t appear near “sensitive topics” – but everything adjacent to those struggles becomes fair game?

And the June 2027 deadline for free teacher access is telling. “After that date,” OpenAI notes, “pricing may change.” The freemium model of enshittification follows a predictable pattern: offer something valuable for free to build a user base, then gradually extract value once users are locked in. Every technology platform from Facebook to Uber has followed this trajectory.

School districts signing multi-year agreements with OpenAI today may find themselves locked into an ecosystem that looks very different by the time those contracts come up for renewal. The data being collected now about student learning patterns, knowledge gaps, interests, and struggles will persist in OpenAI’s systems. The goodwill being built with “free” offerings will translate into bargaining power later.

Sam Altman himself acknowledged in 2024 that advertising would be “a sign of desperation.” Now, facing projected losses of $74 billion in 2028 alone and potentially running out of money within 18 months without massive capital infusions, that desperation has arrived.

Education has always been a target for technology companies seeking long-term user acquisition. Get students using your products while they’re young, and you’ve built customers for life. Google understood this with Chromebooks and Google Classroom. Meta understood it with Facebook. Now OpenAI is executing the same playbook with generative AI.

OpenAI’s stated mission is “to ensure AGI benefits all of humanity.” But when a company loses $13.5 billion in six months and responds by introducing advertising, it’s worth asking whose benefit is really being served.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:



