I stumbled across an article a couple of weeks ago on LinkedIn with a distinctly clickbaity headline: “How Pedagogy Can Catch Up With Artificial Intelligence”
Like many of its kind, the article and subsequent comment-thread discussions on LinkedIn were predicated on the idea that education is broken, teaching methods are outdated, technology moves faster than teachers, teachers need to learn “AI literacy” or “AI pedagogy” or whatever else we’re calling “understanding AI” this week, and that students will somehow be left behind if schools don’t immediately adopt AI technology.
I’m calling bullshit.
Any educator who has used artificial intelligence as part of their day-to-day work will know that there are profound limitations to the technology. Anyone who has actually worked in a classroom in the last 10 years will know that, despite claims that education is broken, teachers continue to do incredible things in spite of huge societal and technological shifts.
The idea that education is like a broken-down wagon that needs to be hitched to the stallion of technology and dragged into the future is mainly attractive to investors and tech CEOs, most of whom haven’t actually set foot in a classroom since they were students themselves.
The view that technologists have of education is condescending, bordering on contemptuous. That contempt is evident in the facile and infantilising technologies currently being promoted into classrooms under the guise of “supporting teacher workload” and “revolutionising education”: technologies which “add sparkle” (emojis) to generic assignments, or generate lesson plans at the click of a button.
It’s worth investigating the pedagogical understanding of these technology company CEOs, because if you’ve worked in education recently and you’ve used AI, it’s pretty clear that pedagogy doesn’t need to catch up with artificial intelligence: it’s the other way around.
Powerful LLMs like GPT-4o continue to grow and develop in interesting and exciting ways, expanding their multimodal capacities with image recognition and generation, audio and video capabilities, complex coding skills, and a host of other impressively versatile features. But the huge dataset which gives language models like GPT their power also reflects some of the worst aspects of the education that technology companies claim to be pushing us away from.
The pedagogical understanding of artificial intelligence is almost non-existent, something which becomes immediately apparent the moment you start trying to generate lesson resources with a language model. If you use a simplistic prompt to create a lesson plan in ChatGPT, for example, it will instantly reveal its biases towards US-centric curricula, outdated educational theory, and atrocious lesson pacing.

LLMs also demonstrate a lack of understanding of the real, embodied elements of classroom space, time, relationships, and interactions between students. That’s why a language model will confidently tell you that in the first 10 minutes of a lesson, you can introduce a topic to students, get them into groups, stage a formal debate, and report back to the class. Seasoned teachers will know that 10 minutes is just enough time to get students into groups, or in some cases, to get students into the classroom.
Artificial intelligence will frequently suggest debunked educational theory as “pedagogy” – kinaesthetic and visual “learning styles,” for example. It reproduces pathologising, neurotypical worldviews regarding neurodiversity, making blanket assumptions about what ADHD, autistic, and dyslexic students need. It recommends outdated practices and exclusionary, ableist tactics which any teacher trained in the last 10 years will have learned through their university course are not acceptable. Because generative artificial intelligence can’t reflect on or understand the data it relies on, it simply draws from a homogenous corpus that embodies these problematic views.

ChatGPT’s training data is US-centric and typically aligned to the Common Core curriculum, which presumably weighed heavily in the dataset as far as educational content is concerned. It lacks a nuanced understanding of indigenous cultures (from any region, including Australia) and of different ways of knowing and being which aren’t captured in text-based data.
A qualified human educator understands how to read a room and respond to individual students’ needs, to go away after a lesson and reflect on what worked and what didn’t. Artificial intelligence cannot do this.
To suggest that pedagogy – the understanding of the educational practice of teaching and learning, the art of expressing an idea clearly to a diverse group of students – needs to somehow be “updated” because of the statistical pattern-matching of a large language model is frankly absurd. Artificial intelligence, and technology companies more broadly, need to update themselves and catch up with pedagogy.
Technology developers need to spend some time in classrooms and have their entire teams interact with educators, to develop an understanding that, despite the clichés of forward-facing, Victorian-style classrooms they might remember, education has continuously evolved and changed over the past decades.
Whilst it may be a struggle for systems to keep up with technological and societal change, educators are not systems – they are individuals. They are capable of adapting, they are capable of doing great things with students in spite of and because of technological advances.
Rather than repeating the contemptuous line that “education is broken and technology is going to fix it,” we need to shift the narrative, start asking better questions, and celebrate what’s great about education.
Get in touch to discuss how Generative AI can be brought into your school or university in ways which respect educator autonomy, and foreground the ethical concerns of technology. Sparkle not included.