Since OpenAI’s ChatGPT launched in December, there has been plenty of speculation about the implications for education. Articles ranging from “The End of High School English” to ChatGPT as a saviour for time-poor teachers have filled people’s feeds for the past month. We’ve seen ChatGPT labelled everything from “mind-blowing” to a threat to the future of writing to just a “glorified cut and paste”.
So what’s the truth about ChatGPT and the future of AI in education? We don’t really know. But we do have plenty of technological antecedents we can look to and, hopefully, learn from.
The use of digital technologies in education has seen highs and lows. Since the late nineties we’ve seen consecutive waves of edtech promising the world. From apps to reduce the “burden” of assessment and feedback, to the underwhelming early-2010s MOOC frenzy, to modern-day Learning Management Systems, each iteration of edtech has been an equal mix of hype and disappointment.
In recent years, we’ve been forced into an increased reliance on edtech because of remote learning – an experience which left many feeling disillusioned with the promise of digital technologies in the classroom. For every advance in the technology, there was an equivalent setback. As much as platforms like Google Classroom and Microsoft Teams enabled us to stay in touch with our students during the pandemic, they proved a poor substitute for face-to-face learning.
Although some remnants of hybrid learning remain, the majority of schools have moved back to full time face-to-face learning. Promises of edtech revolutionising education – even after the massive scale up during remote learning – have largely fallen short.
And then in December, ChatGPT launched, heralding a new wave of the technology-as-saviour narrative. Some have claimed that language models like ChatGPT will bring about the “golden age” of edtech. Personally, I think we’d be better off learning from the mistakes of the past.
This technology will be revolutionary – I have no doubt about that. But, as Vincent Mosco pointed out in his 2004 book The Digital Sublime, real revolutions don’t come from hype. Mosco compares computers to electricity: for all the grand claims that electricity would “light up the streets, end crime, and bring peace and harmony to the world,” the real benefits did not come until the technology was ubiquitous and even banal.
ChatGPT is an app built on top of a powerful model, and one that will only continue to improve, along with its competitors. We need to shift our attention away from the immediate prospects of ushering in a golden age of edtech, and consider a future where AI sits alongside calculators and spreadsheets as part of the educational furniture.
Just because we can, doesn’t mean we should
I’ve been playing and working with ChatGPT since its launch. I’ve written about prompt engineering, and about the potential for ChatGPT to shift the narrative in education towards more human (or humane) styles of assessment. Out in the secondary education community, many people are posting great content for teachers interested in ChatGPT, from Nick Jackson’s free course to Anna Mills’ incredible collation of resources. These are fantastic examples of educators engaging critically and creatively with the technology.
I’ve also seen plenty of attempts to use the model to make aspects of teaching more efficient. From generating lesson plans to entire units, writing reports, and grading assessment tasks, developers are building “time saving” apps all over the place.
Can ChatGPT be used to write detailed unit outlines in specific styles, such as the much-used “Understanding by Design” framework? Sure:
What about writing “essential questions”, or generating report comments?
Too easy. But that’s part of the problem: it’s too easy.
Teachers will use these technologies to save time, and developers will use the well-publicised workload of teachers to profit from time-saving apps built on top of language models. It’s the edtech cycle all over again.
Breaking the cycle
If we want AI in education to go beyond the hype and develop into a technology that we can trust and use effectively, we need to be prepared to go beyond the surface-level uses available to us.
In his 2019 book Should Robots Replace Teachers?, Neil Selwyn lays out a challenge to edtech developers:
“If technology developers want an educational grand challenge or ‘moonshot’ opportunity, then they might attempt to show us a genuinely new way of doing things, rather than attempting to ‘efficiently’ replicate what is already being done.”

– Neil Selwyn, Should Robots Replace Teachers? (2019)
To get to this “genuinely new way of doing things”, Selwyn argues, educators need to be more involved in the entire process of developing and using digital technologies. This is technology as a tool for empowerment, not just efficiency.
We saw echoes of this last year outside of digital technology. When Sarah Mitchell proposed that NSW teachers would get banks of pre-prepared materials to ease their workload, there was an outcry from teachers who would much rather have more time and resources to develop their own quality lessons. This deskilling of teachers in the name of efficiency is exactly the path we’ll follow if we only use AI like ChatGPT to outsource the tedious parts of the job.
Instead, we need to be engaging in two critical conversations:
- If it can be done by an AI, does it need to be done at all? and
- How do we use AI to augment, rather than outsource and replace, our most crucial roles in education?
I’ll leave the first question for another time: there is plenty of bloat in our education system that is worth reviewing.
As far as the second question goes, to break the edtech workload/efficiency cycle we need to consider what human teachers are best at, and how we can use AI to support those aspects of our vocation.
In Selwyn’s book he proposes “Four Laws” for AI in education. The fourth is of particular interest here:
“Public, policy and professional debates about AI and education need to move on from concerns over getting AI to work like a human teacher. The question, instead, should be about distinctly non-human forms of AI-driven technologies that could be imagined, planned and created for educational purposes.”

– Neil Selwyn, Should Robots Replace Teachers? (2019)
The “law” reflects a comment from Garry Kasparov, the chess world champion who was famously defeated by IBM’s Deep Blue in 1997. Kasparov observed that the machine beat him because it did not play like a human. Selwyn argues that this is a key point in working with AI.
He also suggests a number of ways in which humans make far better teachers than machines, including: the ability to reflect on our own learning; improvisation; thinking out loud; and making interesting social and cognitive connections whilst teaching.
I think these are the areas we can focus on when we think about how large language models and other AI might function in our classrooms.
I’m going to end this post with a few examples. This is still new ground, and I don’t claim to have all the answers. My own opinions about the technology change daily. The ethical concerns – from bias to environmental issues, data privacy to class divides – aren’t going away. But for now, I’m willing to keep experimenting and trying to push past “efficiency”. I’m going to base my examples on the “human advantages” from Selwyn’s book.
Using ChatGPT in education: beyond efficiency
Reflecting on our own learning
If you haven’t read Mike Sharples’ Story Machines yet, you should check it out. Sharples writes a lot about AI’s inability to reflect on its own practice. This is one of the things that leads to AIs generating biased and even dangerous content, and is one of the biggest challenges for developers like OpenAI to address.
But we can reflect on our knowledge. Language models are inherently flawed, but humans aren’t free from bias either. Where we differ is the ability to seek out, acknowledge, and act on our blind spots.
I think a language model could help a teacher to analyse a lesson plan or unit of work and identify any gaps, silences, or problematic areas which might be inadvertently biased. Imagine, for example, dropping a lesson plan into ChatGPT with the following prompt:
This is a lesson on The Hero’s Journey for a Year 7 English class. Identify any gaps, silences, or potential hidden biases in what I have written: <insert unit plan>
I tried the above prompt with a real unit plan, and here’s the response it gave:
With all of the conversations around bias in machine learning, I’m hopeful that this is one area where the technology will continue to improve. Eventually, it might reach a point where it is genuinely useful at highlighting our own hidden biases.
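In the meantime, this kind of bias check can be templated so it’s easy to run over many lesson plans. Here’s a minimal sketch using OpenAI’s Python library – the helper names and the model choice are my own assumptions, a starting point rather than a tested classroom tool:

```python
# Hypothetical helper for running the bias-check prompt over many lesson plans.
# Assumes the `openai` package is installed and an API key is set in the
# environment; function names and model choice are illustrative only.
import os

PROMPT_TEMPLATE = (
    "This is a lesson on {topic} for a {year} class. "
    "Identify any gaps, silences, or potential hidden biases "
    "in what I have written: {plan}"
)

def build_bias_check_prompt(topic: str, year: str, plan: str) -> str:
    """Fill the template with one lesson's details."""
    return PROMPT_TEMPLATE.format(topic=topic, year=year, plan=plan)

def check_lesson(topic: str, year: str, plan: str) -> str:
    """Send the prompt to the model and return its critique (needs network)."""
    import openai  # deferred import so the template itself works offline
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=build_bias_check_prompt(topic, year, plan),
        max_tokens=500,
    )
    return response["choices"][0]["text"]
```

Only check_lesson needs an API key; the prompt builder can be reused on its own, or pasted straight into the ChatGPT interface.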
Improvisation

Being able to improvise is definitely out of the realm of current AI language models. The machines can’t “think”, let alone think on the spot in novel and interesting ways. We certainly can improvise, but it can be difficult to think flexibly in the middle of a 50-minute lesson during period 6 on a Friday, faced with a room of twenty-something tired adolescents.
So what if, during the lesson, we start using AI as an “improvisation aide”? For example, if a student is struggling with a particular concept, we might not have the energy to think up novel alternative explanations. But ChatGPT has no such problem, and it makes a great analogy-engine.
Picture the scene:
Teacher: <explains Non-Linear Equations>
Student: I don’t get it
Teacher: <explains it again, with a different example>
Student: <blank expression, shrugging. Yawns>
Teacher, to ChatGPT: Generate three novel ways to explain Non-Linear Equations to a grade ten student at 2pm on a Friday:
Imagine being able to generate on-the-fly essay topics for students who have a particular interest, or following a discussion of a text that has gone down a rabbit-hole. Or being able to quickly draw up scenarios and role plays for students to act out to help clarify their understanding of new concepts. AI has the potential to supercharge a teacher’s natural capacity to improvise and create engaging lessons.
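Prompts like these don’t need to be typed from scratch mid-lesson; they can be templated in advance. A purely hypothetical sketch (the function names and prompt wording are mine):

```python
# Hypothetical "improvisation aide" prompt builders for mid-lesson use.
def explain_prompt(concept: str, grade: int, n: int = 3) -> str:
    """Ask for several fresh explanations, each with a different analogy."""
    return (
        f"Generate {n} novel ways to explain {concept} to a grade {grade} "
        "student. Use a different analogy for each explanation."
    )

def essay_topics_prompt(text: str, interest: str, n: int = 5) -> str:
    """On-the-fly essay questions tied to a student's personal interest."""
    return (
        f"Suggest {n} essay questions about {text} for a student "
        f"who is passionate about {interest}."
    )
```

For example, explain_prompt("Non-Linear Equations", 10) produces a one-line prompt ready to paste into ChatGPT at 2pm on a Friday.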
Thinking out loud
AI systems are notorious “black boxes”. What happens in the hidden layers of a neural network is so opaque that even the people who build these systems can’t fully explain what’s going on. But learning happens when we break apart the black box, which is why think-alouds are so effective in the classroom.
Sometimes, however, a teacher might be as much of a black box as an AI. When I write, I don’t always know exactly where my ideas have come from. So, similar to the improvisation example above, a language model might be able to help explain our own choices.
Using the previous two paragraphs, and the following prompt, I asked ChatGPT to help articulate my thinking.
I’m trying to explain my thinking. I wrote the following passage, but I’m not really sure how I got there, or where I’m heading. Make some dot-point suggestions about my possible thought processes with this piece of writing: <insert paragraphs>
The response is basically a blow-by-blow account of the meaning, but it offers a clear explanation of the logic in paragraphs. It also points out that I have used an analogy, which might lead to an interesting teaching moment about analogies in general. When I wrote the paragraph, I didn’t intentionally use an analogy – I probably included one subconsciously because of the earlier passage when I referred to ChatGPT as an “analogy-engine”.
Making connections

When Neil Selwyn writes about connections, he includes “cognitive connections” – our ability to link together new and surprising bits of information based on everything we have learned and experienced. Language models won’t spontaneously make these kinds of surprising connections, but they can be asked to do so.
Combinatorial creativity is the idea that novel inventions come from combining earlier ideas – making the kinds of cognitive connections humans are great at. This is why famous inventors and philosophers frequently have very broad interests.
In the classroom, ChatGPT could be used to help a teacher make interesting combinations of knowledge in ways that could surprise or inspire students. The students themselves could also use the technology in this way.
To take this one step further, we could also think about the social connections that humans are better at making than machines. I don’t know a lot about sports, but I do know that many of the students I’ve taught in the past few years are passionate about basketball and AFL.
In an extension of the analogy-engine idea, language models can easily create connections between the teacher’s subject knowledge and the students’ passions.
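The same templating idea extends to student interests. A purely illustrative sketch – the names and interests below are invented:

```python
# Hypothetical sketch: generate one bridging prompt per student interest.
INTERESTS = {"Sam": "basketball", "Alex": "AFL"}

def connection_prompt(topic: str, interest: str) -> str:
    """Bridge curriculum content and a student's passion in a single prompt."""
    return (
        f"Explain {topic} using an extended analogy drawn from {interest}, "
        "suitable for a secondary school student."
    )

# One tailored prompt per student, built from the same lesson topic.
prompts = {name: connection_prompt("narrative structure", hobby)
           for name, hobby in INTERESTS.items()}
```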
As it becomes increasingly likely that language models will augment or replace traditional search engines, it will become easier for teachers to bring themselves up to speed on the things their students are passionate about. Building these kinds of relationships makes teaching easier, and more human.
The future of ChatGPT
The future of ChatGPT remains uncertain. There is wild speculation about Microsoft’s investments, and about plans to incorporate the technology into its search engine and Office products. OpenAI have released a waitlist form for a premium, paid version of ChatGPT. And OpenAI’s competitors, not least Google, will be along shortly with their own models.
The fact is, the future of ChatGPT specifically doesn’t matter much in education. Like any app, it will come and go. There will be promises made and broken, and expectations which go unfulfilled. But if we take what we’ve learned from edtech 1.0 and apply it to this new kind of technology, then it doesn’t matter what comes next.
The future of education is going to look very different to today’s structures. Language models and other forms of AI will be an important part of that change, but digital technology is far from the only factor impacting our system. It’s down to educators to grapple with the technology, avoid the efficiency trap, and build better, more equitable, and more inspiring applications than those we’ve seen in the past.
If you know a school or sector leader who needs to hear this, please share the post! Got something to say, or a question about AI in education? Get in touch: