For the past few months, I’ve been visiting schools and hearing from students about why and how they’re using (or refusing) generative artificial intelligence. In this article, I’m talking about two of the more problematic perspectives that have emerged from those conversations: it’s good enough, and it’s better than me.
This blog post is the first in a series on student perspectives, where I’ll be exploring the questions, concerns and possibilities raised by students in K-12, higher education and vocational education.
It’s Good Enough
In discussions with school students across Victoria and New South Wales, the claim that ChatGPT (hands down the most commonly used platform among this demographic) is “good enough” came up time and time again. Senior school students in particular felt that ChatGPT was “good enough” to complete many tasks that they perceived as low-stakes, including homework, classwork that wasn’t being directly assessed, and pretty much anything in subjects that they weren’t prioritising.
Students prioritise subjects for all kinds of reasons, but in my own state of Victoria, one of the main drivers for senior secondary students is maximising scores in what they see as their most important ATAR subjects – the ones that contribute to their university entrance rank. ChatGPT, then, is good enough to handle the rest.
This feedback was particularly common when speaking with boys who were prioritising maths and science subjects. They saw ChatGPT as “good enough” to get their work done in English, a compulsory subject in Victoria and a mandatory part of the ATAR calculation. Pretty much every senior secondary English teacher will know that as a compulsory subject, you’re going to have a large number of students who aren’t there because they want to be there: they’re there because they have to be. Students like the boys I spoke to are content that the free version of ChatGPT does a good enough job on tasks like creative writing, language analysis and writing essays.
“Good enough” chatbots also came into play in conversations about other perceived low-stakes tasks, again predominantly in writing. This included ChatGPT being good enough to write essays in the humanities (history, business, economics, geography and so on) and, once again, in English. These comments tended to revolve around take-home tasks and anything not being assessed, and they were especially common in the year 8 to 10 cohort.
Teenagers, in my experience, tend to be brutally honest, especially when they’re talking to somebody who isn’t one of their teachers. The brutal honesty from the students I’ve been talking to suggests that if a student can’t see the point in what they’re doing, they’re increasingly likely to offload it onto ChatGPT or a handful of other AI platforms. These students aren’t malicious, compulsive cheats. They’re young people with full lives and other things on their minds than Mr Furze’s year 9 English homework.
Try as we might to convince them that the task “finish those couple of paragraphs we ran out of time for in class before the next lesson” is vital to their education, we’d be fooling ourselves if we thought that task isn’t going to be completed by OpenAI’s chatbot.
It’s Better Than Me
I found this one much more insidious than the “good enough” reason. Many students I spoke to, but particularly English as a Second Language (ESL) students and year 9 and 10 girls, responded that they used GenAI when they felt that it was “better than them”.
This wasn’t limited to ChatGPT or language tasks, though ChatGPT was again the most common. For ESL students, the flawless grammar and spelling of ChatGPT can be particularly alluring. It is very tempting, and strictly speaking not cheating, to answer questions in their first language and then have ChatGPT translate them. These students feel that ChatGPT’s polished, if slightly predictable, prose is a better alternative to their nascent English language skills.
The logic also applied in other domains. Students were using image generation both as an alternative to traditional image search and as an alternative to creating images themselves. There was a common concern across senior school visual arts students that there would be no point going into career pathways in the creative industries because AI is “already better than them” at producing illustrations and digital artworks.
Many students, particularly those in years 10 and 11, had also experimented with music generation, and while acknowledging that it is currently pretty cringe, felt that it sometimes sounds believably human. This left music students also questioning the future of those career pathways.

Two Thorny Problems
“It’s good enough” and “it’s better than me” represent two of the biggest problems with GenAI in education, but they’re not necessarily caused by the technology itself. ChatGPT doesn’t create the conditions under which students decide their work is good enough. And whether GenAI produces art “better” than a human is a subjective judgment, not a fact. These two problems seem to stem much more from systemic problems in education and from students’ self-esteem, confidence and understanding of why we learn and create.
Many of the same students who said they used ChatGPT to write essays because it was “good enough” and allowed them to prioritise other areas of interest did not use GenAI in those areas of interest, or used it very differently. One student who coyly admitted using ChatGPT to do almost all of his English homework for the past 18 months said that he also used it in mathematics, but as more of a coach to break down complex problems and step him through how to reach the solution. He also, in a focus group we held with students one lunchtime, pointed out something that all teachers and former students of maths are familiar with: in maths, the answers are normally at the back of the book anyway, so using GenAI to find the answer is kind of pointless.
This speaks to the student’s differing understandings of the point of learning and of finding answers in these two subject areas. In maths, the student understood that the point was the journey, not the destination. But in English, their perspective was almost the exact opposite: the journey was a burden, the destination something to be offloaded to ChatGPT.
Intrinsic motivation is obviously a concern. This student, and the others who made similar arguments about their use of GenAI, lacked the motivation to go on the journey in subjects they were less interested in. Sometimes their motivation for maths and science came from their ATAR pathways; at other times it came from personal interest. And the converse was true: students who loved English, writing or visual arts tended to be the most vocal refusers of GenAI.
The students who didn’t consider themselves naturally gifted artists and writers made up the lion’s share of those using GenAI for creative purposes. Alongside the ESL writers, they were in the majority of the “it’s better than me” camp. And to me, this again suggests a problem with how we talk about the journey or the creative process. In this case, when we put too much emphasis on the destination – the final polished essay, the finished piece of artwork, the complete portfolio – then of course students who lack confidence in those areas will start to look for ways to produce those final pieces.
What Can We Do About It?
There are no easy answers, but based on their feedback, I think these students are already telling us where some of the biggest problems lie.
Systemic incentives, including the perceptions of which subjects contribute most to university entrance scores, encourage students to focus on certain subjects at the expense of others. Complicating the problem, students often have a limited range of choices available to them and end up in subjects that they’re not particularly interested in. Any barriers to intrinsic motivation are going to lead to more GenAI use.
I’m not suggesting that we need to make everything “easy” for students. Learning is supposed to be hard, and there’s no alternative to the hard work of learning: I’ll talk about that more in a future article. But we can also work on making pathways through K-12 more accommodating to students’ diverse range of interests. These are scheduling and timetabling concerns, and perhaps staffing concerns, which relate more broadly to teacher shortages and initial teacher education. But they are not totally outside our circle of influence.
And for the students who believe that GenAI is better than them, we need to acknowledge that this might be true in the sense of finished, polished quality, but that it doesn’t change how we value the creative process. Again, this is something schools have a certain amount of control over, because we are the ones who can show students that we value the process.
But do we? I don’t think so. I think right now, secondary schooling in particular values end products, exam results, end of unit assessments and grades. And whether consciously or subconsciously, students pick up on these values as they progress through secondary schooling and come to conclusions like “AI is better than me.”
We have choices to make, and they are not easy choices. We need to choose structures, systems and ways of recognising achievements that don’t lead students to prioritise some subjects over others. We need to create pathways that encourage intrinsic motivation, interest and engagement, and not just academic performance. And we need to show students that we value processes and not just products. But the hundreds of students that I’ve spoken to over the last few months are looking to us to make these difficult choices.
