GenAI is Normal Edtech


I’m exhausted. You’re exhausted. Pretty much every educator I speak to is – in one way or another – exhausted by GenAI. Sometimes it’s exhaustion at being bombarded with advertisements from vendors, hype from social media, and misuse by students. At other times it’s the opposite: exhaustion at the negativity, the pushback against a technology many see as time-saving, workload-reducing, and helpful.

Beyond the classroom, AI is pitched as an imminent superintelligence which will either save us or destroy us, curing cancer or ending the world in a hailstorm of paperclips. In reality, it’s already showing signs of wear, with major platforms sliding into deepfakes, slop-laden video feeds, and pornography.

Either way, we’re tired. But these feelings of exhaustion and confusion are also normal. For decades in education we’ve been subject to the peaks and troughs of technology cycles. We’ve survived tech industry bubbles, taught through COVID and come out the other side, and witnessed and contributed to the rise of personal computing, the internet, social media, and every other technological movement.

In this article, I’m using Arvind Narayanan and Sayash Kapoor’s framing of AI as “normal technology” to explore why GenAI tools like ChatGPT, image generators, and more recent multimodal models are transformative and important, but ultimately no different to other system-level technologies like the internet, phones, and electricity.

ethernet cable connected to system
Photo by Brett Sayles on Pexels.com

What is “normal”?

Narayanan and Kapoor’s framework of AI as normal technology is partly a prediction about how AI is likely to unfold, and partly a prescription: a way we should approach it, particularly in complicated human areas like healthcare and governance. The core idea is a pushback against technological determinism, the notion that the tech itself is an inevitable force driving its own future. Instead, this view sees AI as an inert technology that we have to shape and control, looking for historical parallels rather than assuming a massive, sudden break from everything that came before.

It’s a compelling argument: GenAI is behaving like every other major technology that came before it, following the same predictable cycle of massive hype, bumping up against real-world organisational limits and, importantly, amplifying the social risks and inequalities that were already there.

In some ways, the “normal” technology framing is at odds with perspectives on AI which consider it to be entangled in and networked through human relationships. There are many solid arguments against the idea that AI is “just a tool”. But in this article I’m going to put those arguments aside for a while and look at what Narayanan and Kapoor’s framing can offer to the edtech discussion.

close up of electric lamp against black background
Photo by Pixabay on Pexels.com

Three speeds of progress

To understand the normal technology framing, I’m going to focus on just one aspect: the speed of progress. Narayanan and Kapoor argue that normal technologies unfold at three separate rates of progress, operating on very different timelines. These three speeds help to explain why it feels like the technology is leaping ahead and we’re being “left behind”.

Speed One: Invention

This is the pure research and development: creating new AI methods and underlying technologies. Large language models (LLMs), the transformer architecture (the ‘T’ in GPT), diffusion-based image generation and so on all represent significant and sometimes surprising leaps forward. The speed of invention is incredibly fast right now, powered by the financial bubble and heavy investment. Breakthroughs feel constant, and headlines abound.

Speed Two: Innovation

Innovation is slower. This stage is about developing actual products and applications using those new methods: building a new platform based on an LLM (like ChatGPT, Copilot, or Gemini), or creating an AI coding tool (like Codex, Cursor, or GitHub Copilot). Market forces, investment cycles, regulation, development time and other factors slow this stage down, so innovation lags slightly behind invention.

Speed Three: Adoption and Diffusion

Diffusion is the slowest by far. It’s the broad social process where individuals, companies, schools, hospitals and workers actually start using the technology widely, integrating it into their daily workflows. This is slow because it requires fundamental changes to everything else. Organisational structures have to change. People need new skills. Social norms adapt. Laws might need updating. Think about how long it really took for computers or the internet to change how most businesses fundamentally operated: decades. Many businesses in my local town don’t have websites (a couple don’t even take electronic payments, though I suspect that’s more of a tax rort than an aversion to technology…).

So, the technology might be ready in a year or two, but the organisation – the school system, the hospital – might need 10-20 years to really figure out how to use it effectively. The societal and economic impact doesn’t match the invention speed. It tracks the diffusion speed, which is measured in decades, especially for anything complex or high-stakes.

Think about how this applies to other “normal” technologies, in and outside of education. The internet, for example, didn’t spring to life overnight. CERN released the source code for the World Wide Web in 1993. The first widely used graphical browser – Mosaic – was released shortly after, leading to Netscape Navigator and Internet Explorer. In 1994 we got Yahoo, followed by AltaVista (1995) and early Google (1998). Those first few years were a hectic period of development and rising public interest, and fed the dot-com bubble which eventually burst in the early 2000s.

But the adoption of internet technologies followed a different, slower curve. In 1994, only 14% of US adults had internet access. Even by 2000, this number was only 50%. Globally, the number of connected adults was only 7% in 2000. Fast innovation, slow diffusion.

Now think about your own education. I remember getting a home PC while I was in primary school, and a modem in maybe ’95 or ’96, but the single computer at my primary school was not internet-connected. I started secondary school in 1997, but I don’t recall using internet-connected computers at school until at least 1998. For my final-year A-Levels I studied computing, but the focus was on Computer Aided Design (CAD), databases, and coding. Outside of school I was by that point maintaining several (awful) websites and a MySpace page. Inside school the majority of my computer use was still offline, even in 2003.

Flash forward to teaching in 2019. Our regional Catholic school had internet access, 1:1 devices, and most teachers were of course making use of the internet. But we did not have Google Classroom across the whole cohort (that was introduced, as at many schools, in a panicked frenzy at the start of COVID). We used Outlook for emails, a networked “K Drive” for storage, a Catholic-system-endorsed Learning Management System called SIMON for assessment and reporting, and whatever apps individual teachers stumbled across. And my experience is incredibly common.

The internet landed in a flurry of activity and anticipation in the early 90s, but it has taken over thirty years for many – and certainly not all – schools to reach a stage of consistent, meaningful, reliable internet use.

Narayanan and Kapoor’s framing suggests AI will be similar.

What is “normal” edtech?

The internet and AI are not “edtech”, but they facilitate and perhaps accelerate the dissemination of products which might be labelled as “educational technology”. The history of edtech is long and entangled with the rise of the internet, but many of the concepts – personalised learning, scalability, 1:1 tutoring, learning analytics – have existed for much longer.

From Sidney Pressey’s 1920s-30s work on the Automatic Teacher, through the behaviourist pigeon-pecking of Skinner’s 1950s teaching machines, and onwards through waves of attempted automation and augmentation, technologies have long been presented as ways to “solve” the so-called mundane aspects of teaching and learning. Modern-day dashboards, algorithms, and the datafication of students are variously presented as solutions to teacher workload, administration, student behaviour, and even attention, emotions and engagement. And for the past 20 years or so Artificial Intelligence has been an important part of those conversations.

Pressey’s Automatic Teacher. Image via https://www.boundary2.org/2015/04/the-automatic-teacher/

But what has all of this edtech and analytics actually achieved? Like the “normal” technology of the internet, edtech sees occasional flurries of investment, development, and innovation. Global events like the dot-com bubble and COVID spurred on the production and release of new apps, and new movements in edtech. The burst of the dot-com bubble was followed by the rise of “web 2.0” technologies, and anyone teaching in the period from around 2005-2015 has surely tried at least once to have students create wikis, blogs, or – cringe – fake social media profiles for short story characters.

Entire companies were built around these ideas. Edmodo (2008-2022) was explicitly a “Facebook for schools”. Quizlet is a user-generated-content platform that applies the logics of gamification, social capital (through “study groups”) and dashboard-style analytics. TeacherTube (2007) sprang up as a response to school districts banning YouTube. And a plethora of educational blogging platforms, wikis, Learning Management Systems, Massive Open Online Courses (MOOCs), and other platforms have come and gone.

Again, I want you to think of your own education history. Whenever you went to school, and however long you have taught for, think about the timescale of these edtech applications. If you went to school in the late 90s-early 00s, how long did it take before your school adopted internet-based technologies? If you taught in the period from the 00s-20s, how many applications came and went? What stuck, and why? And if you’re teaching now, what edtech infrastructures exist, and do they actually work?

Now step aside from the massive hype and media coverage of AI for a second and think: what if AI is “normal edtech”? How does this framing change the way you might approach the technology in the classroom, or in school or university policy? What does that extended timeline – not right now, but from now for the next 10-20 years – do to your imagined horizons?

Narayanan and Kapoor’s article is long and worth reading in full. It covers much more than just the speed and scale of adoption, going into risks, possibilities, and implications for policy and development. I’d encourage reading the article, even if you disagree with the basic premise. These are polarising issues. Last week, when I posted an article suggesting that GenAI is a bubble about to burst, it split commenters down the middle. I imagine this article will do the same, and there will be many valid arguments against the idea that GenAI is “normal”.

But if you’re exhausted by the discourse surrounding GenAI and the seeming inevitability of personalised tutors, chatbots, and other LLM-based products, I’d encourage you to take a few steps back and consider what these products look like as “normal” technology.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch.
