December 2022 represented a step-change in Artificial Intelligence (AI) technologies. With the release of OpenAI’s ChatGPT, suddenly the whole world (or at least the millions of users who logged in within a week of its launch) was talking about Large Language Models (LLMs). LLMs represent just a portion of AI technologies, but ChatGPT shifted the public narrative in ways we hadn’t seen before, even with impressive image-generation applications like DALL-E 2 and Midjourney.
LLMs are machine learning models trained on enormous datasets of written text. The algorithm develops a probabilistic model for predicting the next word in a sentence based on the text it has seen before, leading some AI experts to dismiss the technology as glorified cut-and-paste. OpenAI’s model – though by no means the largest or most powerful – captured the world’s attention because of its open access and “chatbot” user interface.
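For readers curious about what “predicting the next word” actually means, here is a deliberately tiny sketch of the idea. This is not how ChatGPT works under the hood – real LLMs use neural networks trained on billions of documents – but counting which word tends to follow which in a toy corpus captures the same underlying principle: a probability distribution over possible next words.

```python
from collections import Counter, defaultdict

# A tiny toy corpus, split into words.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build bigram counts: for each word, count the words that follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # "on" -- the only word ever seen after "sat"
print(most_likely_next("the"))  # "cat" -- ties are broken by first occurrence
```

An LLM does something analogous at vastly greater scale, over fragments of words rather than whole words, and with learned patterns rather than raw counts – which is why “glorified cut and paste” is a caricature, even if the probabilistic core of the criticism is real.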
Practical Strategies for ChatGPT in Education ran as a live webinar in February. To access a recording of the webinar, click here:
People were quick to jump on the ChatGPT bandwagon, creating raps, Shakespearean sonnets, and prompts for AI image generators. The discussion quickly escalated in education, where both secondary and tertiary educators began to divide along dystopian and utopian lines. On the one hand, some have claimed that ChatGPT will spell the end of formulaic assessment and usher in a new age of more human education. On the other, we have seen everything from predictions of the death of high school English to fears that AI-assisted writing will unleash an epidemic of cheating and academic dishonesty.
The truth of the matter is likely somewhere in between. As others have pointed out, the hype cycle of AI is nothing new. New, disruptive technologies have often been heralded with overblown claims and end-of-the-world pessimism. One fact is clear, however: whether it’s brought in by students, teachers, or corporate entities, AI is in our classrooms and it’s here to stay.
Getting creative with AI
While some districts have rushed to ban ChatGPT, many educators and academics are advocating for integrating the technology into classrooms. In Australia, there is a split between states, with QLD and NSW blocking the ChatGPT website while SA and VIC leave it unblocked. Dr Nick Jackson, Director of Digital Technologies and Learner Management Systems at Christian Brothers College, Adelaide, has created a free online course for educators which includes advice on how to use ChatGPT across different subject areas, for administrative tasks, and for assessment and feedback.
Elsewhere, people have tested the possibilities of ChatGPT as a teacher’s aide, a creative writing partner, and a coding tutor. These examples all demonstrate something we’ve long known about educators in general, but which was exemplified during the COVID lockdowns: teachers are often quick to adopt and adapt to new technologies, and willing to share what they find with the community.
I wrote a post not long after the release of ChatGPT, describing how better prompting leads to better results. It has been one of my most successful posts and joined a long line of commentary around the future of “prompt engineering” or “prompt whispering” which might become a feature of using LLMs.
While we’re all having fun with ChatGPT, however, it is important to stay mindful of the shadow side of AI. If we are going to use these technologies in the classroom, educators need to be aware of what we’re really handing to our students: both the good and the bad.
Pulling back the curtain
Kate Crawford’s 2021 book Atlas of AI provides an excellent introduction to the murky world of AI ethics. Crawford, an Honorary Professor at the University of Sydney, writes about many of the ethical concerns facing the AI industry, from the environmental impact to the social. For educators considering adopting AI in the classroom, Atlas is a must-read.
Take, for example, the well-publicised algorithmic bias present in many AI datasets. This is the bias – including of race, gender, heteronormativity, social class, and disability – inherent in the large sets of data AI models are trained on. There has been much focus on the tendency of AI models to produce racist and sexist content. Less publicised, however, is the human aspect of this bias.
It’s easy to see the biased datasets as a machine problem – “rubbish in, rubbish out”, as the computer science cliché goes. ChatGPT, on its initial release, was quickly revealed to share these flaws as people broke its guardrails and produced plenty of questionable content. Crawford, however, devotes a chapter to exploring Classification: the (often low-paid) human labour used to label images and text, feeding the always hungry algorithms.
Crawford explores the “unspoken social and political theories” which underpin the classification of data. Using the popular ImageNet dataset as an example, she explores how prejudices make their way into the datasets in the first place, arguing that we can’t simply program our way out of the bigger societal issues highlighted by algorithmic bias.
In fact, this refusal to “blame it on the machines” runs throughout Crawford’s work. It is easy to mythologise AI, seeing it as an almost magical technology which spits out unpredictable and occasionally awe-inspiring results. AI processes often happen inside a black box, meaning we can neither foresee the output nor understand the rationale behind it. This “magic” of AI – something akin to the Great and Powerful Oz manipulating the Emerald City from behind his curtain – is also something we should question.
The University of Sydney’s Dr Benedetta Brevini also writes about the perils of “AI myths”, critiquing a discourse that not only serves the status quo of consumerism and productivity but also paints AI as our “technological saviour”.
Amidst all the hype surrounding AI and its obvious potential, educators need to remain mindful of the dark side of the industry and stay open to criticisms that knock AI off its techno-solutionist pedestal.
To say that it’s inevitable that educators will have to deal with AI technologies seems like a form of techno-determinism. However, I believe it is fair to say that AI will make an appearance in our classrooms in one form or another – if it’s not there already. From the data collection and analysis present in Learning Management Systems through to students using ChatGPT to write their Pride and Prejudice essays, AI will creep into our classrooms in all kinds of ways.
I’d like to see educators adopt an approach suggested by Neil Selwyn in his 2018 book Should Robots Replace Teachers? We need to see AI in education as a “complex and highly controversial matter” and not content ourselves with letting AI slip into our classrooms while we’re not looking. To do this, we need to be able to use AI both creatively and critically.
By engaging with AI systems like ChatGPT, we discover for ourselves the limits and possibilities of the technology. By making ourselves aware of the ethical, cultural, and political concerns behind the technology – adopting what Selwyn and others refer to as a socio-technological view – we can address the questions of when, if, and how we should use the technologies.
Got a question or something to say about AI in education? Get in touch: