Beyond Cheating: Why the ban and block narrative hides the real threats of ChatGPT in education

An open book with a pen and a calculator on top, with a Chatbot interface open on a laptop in the background, dramatic, cinematic, dark --ar 3:2
Image via Midjourney – prompt in alt text

If you’re not familiar with this technology and you’d like to understand the basics of AI, large language models, and ChatGPT, check out this post first.

This post is part of a series exploring ways we could (and ways we shouldn’t) bring large language models like ChatGPT into education. The first post, ‘Beyond Efficiency’, can be found here:

Amidst all the media hype surrounding ChatGPT, the prevailing fear is that students will use it en masse to cheat in their secondary and tertiary assignments. Aside from the fact that most students haven’t actually returned to the classrooms and lecture theatres yet, this narrative obscures more complex threats.

Banning and blocking ChatGPT in education is not just impractical, it’s irresponsible. This post explores some of the potential perils of the “cheating machine” narrative in the hope that we can start to explore other options.

Widening the Digital Divide

“Digital divide” is a term that’s been thrown around a lot in recent years, particularly as COVID and remote learning highlighted the very real gap between people in our communities who have access to digital technologies, and those who don’t. What is less often explored, however, is the impact that policy and approaches to technology in education have on the divide.

One argument of the pro ban/block side is that ChatGPT and technology like it will widen the gap by allowing those with ready access to the technology to gain an unfair advantage over those without it. But the digital divide is a complex techno-social problem. It’s not a simple thing to fix, as failed “One Laptop Per Child” initiatives and scorned programs from major tech firms like Meta have proven. Banning ChatGPT won’t fix the divide, but it might just make it worse.

In Australia, system-wide technology bans have tended to be implemented by state schools, but not Catholic or Independent sectors. We saw this with mobile phones in recent years, and with the blocking of YouTube from Department school networks back in 2007. This puts state school students at an immediate disadvantage compared to students in other sectors, as it denies them access to technologies which may be useful, and which are certainly used in industry and life after school.

More importantly, however, it widens the gap within the state sector. Students in state schools who have access to their own devices – whether that’s phones, tablets, laptops, or devices at home – will still find ways to use ChatGPT. Just as students flouted the phone bans and accessed blocked sites anyway, they’ll access the language model despite the ban.

So the only people a ban really impacts are those students who already lack access to devices. Students who, for whatever reason, cannot access a phone or do not have a device at home will be doubly disadvantaged compared to their peers.

smartphone in a classroom frozen in time + cinematic shot + photographs taken by Minolta, Leica Camera, ARRI, Nikon, Kodak, Sony, Hasselblad + incredibly detailed, sharpen, details + professional lighting, photography lighting + 50mm, 80mm, 100m + Lightroom gallery + Behance photography --ar 3:2 --q 2
The jury is still out on whether banning mobile phones in classrooms is actually effective. Image via Midjourney – prompt in alt text

When Censorship Backfires

We’ve also seen plenty of instances where attempts to block or censor technologies have the exact opposite effect and make the subject of the ban more appealing. This isn’t just because teenagers are notorious for doing the exact opposite of what they’re told: censorship can lead to an explosion in people seeking alternate ways to access banned technologies.

Blocking ChatGPT will only add to the ongoing storm of media hype, quite possibly resulting in more students using it to cheat. I’ve had a few conversations recently with ex-students of mine who are about to start their tertiary studies. By their own admission, they don’t read a lot of news. But they’ve heard of ChatGPT, and almost everything they’ve heard has revolved around cheating.

By censoring ChatGPT and contributing to the cheating hype, we risk a self-fulfilling prophecy where students are more likely to learn about the technology, seek it out, and use it to cheat on assignments.

Heads in the Sand

There are far greater ethical considerations with AI than academic integrity. The “algorithmic bias” of large language models, for instance, is well documented. Bias inherent in the huge training datasets scraped from the internet makes its way into these models and is then reproduced in the output. This can result in models producing overtly sexist, racist, ableist, and otherwise discriminatory language.

But that’s not the only ethical concern. Even seemingly benign output reinforces a static snapshot of the world according to the scraped data. ChatGPT, in its current form, has a dataset that ends in 2021. This means that every output it provides is predicated on a knowledge base that ends in that year. To understand the problem with this, imagine if you were talking to someone whose knowledge ended in 1930, or 1955, or 1972. Their worldview would be significantly different to yours. We expect societal values and norms to change over time, but with current generation language models the worldview is encoded and static, baked into the technology and reinforced with every output.
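
For the technically curious, here’s a minimal sketch of how that frozen worldview shows up in practice. It assumes access to OpenAI’s Python library and an API key; the model name and prompts are purely illustrative:

```python
# A minimal sketch of the knowledge cutoff, assuming the openai Python
# library (pip install openai) and an API key in the environment.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# One question the training data can answer, one it cannot.
prompts = [
    "Who won the 2018 FIFA World Cup?",  # well within the cutoff
    "Who won the 2022 FIFA World Cup?",  # after the training data ends
]

for prompt in prompts:
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3.5-era completion model
        prompt=prompt,
        max_tokens=50,
        temperature=0,  # minimise randomness so the contrast is clear
    )
    print(prompt)
    print(response.choices[0].text.strip())
    print()
```

The first question gets a confident, correct answer. The second is answered from a world that stops in 2021, so the model will either hedge or confidently invent a result – and it won’t volunteer that limitation unless asked.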

Articles outside of the “cheating” narrative have exposed other serious ethical concerns. A Time magazine piece on the treatment of low-paid Kenyan workers hired to sanitise the data shines a light on one of the darker aspects of the technology. To avoid the aforementioned bias, as well as graphic violent and sexual content, organisations like OpenAI rely on manual filtering by people working in terrible conditions.

All of these ethical issues and more contribute to the “shadow side” of AI. That isn’t to say we should ban the technology, however: quite the opposite. If we assume ChatGPT will be used by our students – and it will – then we have an obligation to have open and frank conversations about these concerns.

We can help our students to come to their own conclusions about how, when, and even if to use the technologies rather than simply sticking our heads in the sand and hoping someone else will deal with the bigger problems.

The Hidden Cost to Teacher Workload

Whenever a technology is banned, the policing and enforcing of that ban ultimately falls to the teachers. Although it will be state and sector bodies who make the decision to block or not, it will be teachers who are faced with challenging students, contacting parents, confiscating devices, and poring over assignments to check for evidence of AI assistance. This isn’t just making sure students aren’t using calculators under the table in a tech-free Maths test: it will require constant vigilance and a healthy dose of suspicion.

At a time when teacher workload and the teacher shortage crisis are reported on almost as often as this cheating narrative, it seems absurd to add another arbitrary burden to teachers’ work. It also perpetuates the “us versus them” narrative of students and educators, with the latter being seen as authoritarian sources of expertise and truth. These technologies are very new, and we have yet to see their full potential. To assume that we have the right to decide for students whether it is an appropriate and acceptable technology seems arrogant, and reminiscent of outdated models of education.

We should be working with students, not against them. ChatGPT, and what it represents, is not another battle that needs to be fought (and probably lost).

teacher bent double carrying burden of pile of desktop PCs portable computers and monitors on shoulder + blackboard + chalk --ar 3:2 --q 2
Should the burden of policing technology fall to teachers?
Image via Midjourney – prompt in alt text

The Real Threats of ChatGPT

Will some students use ChatGPT to cheat on assignments? Absolutely. Is that a reason for banning or blocking the technology? Definitely not. The real threats ChatGPT poses to education go beyond issues of academic integrity. Banning the technology will worsen the digital divide, drive more students to use it to cheat, prevent us from educating students about the real ethical concerns, and push teacher workload even higher.

We can do better.

I want to see schools, universities, and education sectors grappling with the implications of these technologies – the good and the bad. I want to see robust conversations happening in classrooms about the full spectrum of ethical, social, and academic concerns with using AI. I want to see teachers coming up with inventive and creative ways to use the technology to demonstrate to students its capacity beyond writing sub-standard but “passable” essays.

I’d love to see students and teachers working together to create the next generation of these technologies, and helping to shift the attention away from cheating, and towards the things that really matter.

Got a question or something to say about this article? Get in touch:
