OpenAI just released a feature that has been on the horizon for a little while: Study Mode. Study Mode, according to the OpenAI website, offers “a new way to learn in ChatGPT that offers step by step guidance instead of quick answers.”
In this post, I’ll run through the features of Study Mode, explain how students might access it, and raise some questions about the supposed “pedagogy” behind the approach.
What is ChatGPT Study Mode?
The following synopsis of features is based on the official OpenAI post launching the product, as well as some of my early impressions using Study Mode this morning:
- Interactive prompts: Study Mode combines “Socratic questions”, hints and self‑reflection prompts so learners are encouraged to work through ideas rather than receiving immediate answers. We have seen this approach before from companies including Google and Magic School. “Socratic questioning” is often offered as a default solution to LLMs giving students direct answers.
- Scaffolded responses: Explanations are broken into short, dot-point sections that highlight connections between concepts and follow-up questions. There appears to be a liberal sprinkling of emojis throughout, in keeping with the general style of GPT-4o responses for a while now.
- “Personalised feedback and support”: According to the website, Study Mode adapts depth and difficulty to each student’s skill level, using quick diagnostic questions and chat memory to keep guidance on target. In my use of the application, I noted that it was adding to memory much more frequently than in general conversations. This might cause problems for users who multitask with ChatGPT since memories are known to bleed across conversations.
- Knowledge checks: Study Mode regularly inserts quizzes and open‑ended questions to attempt to gauge understanding before moving on. In my tests, it even asked multiple choice questions before I had told it what I was studying, and what level of schooling I was in.
- “Flexibility”: You can switch study mode on or off at any point, letting you move between Study Mode and regular ChatGPT interactions. This is an interesting one: of course, OpenAI isn’t going to limit access to its main product. But I wonder if down the track there will be ways to “force” users into study mode, for example in the integration with the Canvas LMS. Otherwise, there’s nothing to stop a frustrated (or bored) student just switching to the better option.

Testing ChatGPT Study Mode
In my first test I asked a simple Literature vocabulary question: help me to understand synecdoche. The immediate response was to bombard me with questions and then, without leaving space for a response, fire a sort of pre-test knowledge quiz at me.
According to the website, “study mode is powered by custom system instructions we’ve written in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning.” Apparently, “wait for the student to answer a question before asking three more” isn’t part of the instruction set.
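OpenAI hasn’t published the actual instruction set, but the general mechanism – a tutoring-style system prompt layered over the base model on every turn – is easy to sketch. The prompt text and the `build_request` helper below are my own illustration of the pattern, not OpenAI’s real instructions:

```python
# A minimal sketch of a "Study Mode"-style setup built with custom system
# instructions. The instruction text is invented for illustration only.

STUDY_MODE_INSTRUCTIONS = """\
You are a patient tutor. Guide the student with Socratic questions and hints
rather than direct answers. Ask ONE question at a time and wait for the
student's reply before asking another. Adapt depth to the student's stated
year level, and check understanding before moving on."""

def build_request(student_message: str, model: str = "gpt-4o") -> dict:
    """Assemble a chat-style request payload with the tutoring prompt attached."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": STUDY_MODE_INSTRUCTIONS},
            {"role": "user", "content": student_message},
        ],
    }

payload = build_request("Help me to understand synecdoche.")
print(payload["messages"][0]["role"])  # the system prompt rides along with every turn
```

The key design point is that the “pedagogy” lives entirely in that system prompt: the underlying model is unchanged, which is exactly why behaviours like “wait for an answer before asking three more questions” depend on how well the model actually follows the instructions.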

After clarifying that I am a Year 11 student studying VCE Literature, ChatGPT added that fact to its memories (which I’ll have to clear manually before I continue other tasks). It then proceeded to offer comparisons, more examples, and a repeat of the multiple choice question.

Like any good student, I wondered aloud “what’s the point?” See also: “when will I ever use this in the real world?” and “I have to go to the bathroom.”
ChatGPT, like a good little sycophantic robot, rewarded my petulance with praise, letting me know that all strong Literature students ask these kinds of deep and meaningful questions. LLM sycophancy is a well-known issue, and one which OpenAI in particular has suffered from in recent years. In a “Study Mode” chatbot, it presents additional challenges since sometimes we want more friction in the learning experience, not less.

Tackling Complex Challenges
The model has been trained to tackle complex challenges “step by step”, which, alongside “Socratic questioning”, appears to be one of the core pedagogies of the chatbot.
In the following example, I’m playing the role of an uncertain student (not really, I genuinely have no idea what any of this means) and trying to tease out more and more discrete steps to solve a Specialist Mathematics problem:
At this stage in my experiments, though, I noticed something interesting: it is possible to select different models within Study Mode. This has some serious implications.
Students are, theoretically, able to access study mode in a variety of ways. According to the website the product is “available to logged in users on Free, Plus, Pro, Team, with availability in ChatGPT Edu coming in the next few weeks.”
But a student using the US$20 per month “Plus” version has access to the o3 reasoning model, and turning that mode on gives qualitatively different results. In the video above, I’m using o3. After “thinking” about the problem, the model decides to give a short response prompting me to remember what I’ve already studied about related topics.
With GPT-4o enabled – the default option and the model used in the free version – the output is very different:



As you can see, despite the system instructions to avoid answering questions outright and to break topics down step by step in a way that encourages reflection, with the 4o model Study Mode just… answers the question. It does break it down step by step, but the response is basically identical to the same prompt with Study Mode turned off:

Shortly after I posted the article, Shern Tee commented on LinkedIn with his “maths teacher hat on”, and the comment was articulate and helpful enough that I need to include it here:

What’s The Point?
Coming back to my question about synecdoche: what’s the point?
Ostensibly, the point is to help learners to use ChatGPT in a way that will not lead to the dreaded cognitive offloading, brain rot, or whatever other fear narrative we’re being offered this week. Study mode will stop students’ brains from dissolving into pudding, because it doesn’t answer the questions for them (except for when it does).
In reality, any student using GenAI to answer questions is equally likely to turn study mode off, or just use another platform entirely. There could be a feature in future integrations – such as the Canvas LMS partnership – to force students to use study mode. Similar things are already being trialled in places like New South Wales and South Australia, with government-built chatbots that are offered to students as helpful alternatives.
But none of this is the point of study mode.
OpenAI’s move signals two things: (1) the company recognises that its largest user base is students, even if they don’t really bring in much cash and (2) it is reclaiming territory from the dozens of niche “Socratic chatbot” wrappers built on its own API. Companies like Magic School should probably be worried. Departments of Education that have spent years (and tens of thousands of dollars) building their own GPT-based chatbots should possibly be concerned.
But is this shift good for teachers and students in the long run? Policy momentum suggests the question may already be answered for us: governments in the UK and US are trialling ChatGPT-powered platforms, while ed-tech giants such as Canvas LMS are embedding OpenAI models directly into their products.
The real test will be whether Study Mode actually supports deep learning rather than shallow answer-hunting, and whether students will actually bother using it.
At the very least, it can still explain what synecdoche means (a figure of speech in which a part stands in for the whole, as in “all hands on deck”). My Literature students would be delighted.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
