Technology is Actually Pretty Great

[Image: old computer with keyboard near wall]

I’m returning to the blog after a short sort-of-hiatus, during which most posts have simply shared published articles. The break was necessary for a few reasons: I’ve been working on a new online course, finishing my PhD thesis, and eking out the last days of the school holidays as much as possible. But my kids have gone back to school, and I must get back to work…

Over the break, I got caught up in my own thoughts about technology in education. Obviously, I spend a lot of time (too much time) thinking about and using GenAI, and if you ask most teachers what their most pressing digital technology concern is right now, they’ll probably say “ChatGPT”. For the most part, it’s concern over student (mis)use, but there are also plenty of educators worried about the bigger picture problems represented by OpenAI’s chatbot: privacy, sustainability, copyright, human labour… too many to name.

And yet, OpenAI – for all of their many problems – did not cause these particular issues. If anything, OpenAI is a symptom and a product of the tech industry as a whole. If you’re at all interested in learning more about how the not-for-profit set up to usher in safe, accessible Artificial General Intelligence ended up as one of the world’s highest-valued, commercially driven private tech companies, I highly recommend my holiday reading: Karen Hao’s Empire of AI.

But this post is about broader concerns than OpenAI, broader even than GenAI technologies as a whole. It’s a post about the way we approach digital technologies in education, and why something has to change if we want to really prepare young people for a full, rich online life.

Technology is actually great

At this point, I probably sound like a curmudgeonly anti-tech Luddite – at least, that’s what commenters on my LinkedIn posts would say. But, much like notable tech critic Ed Zitron, I love technology. I’m of the generation that grew up with dial-up internet and hulking great beige PC towers with monitors that were 80% hardware and 20% screen. I have lived online from the moment it was possible to @ my way into an operator position on IRC (admittedly, it was in a channel of about six people, all from my computing class).

In fact, I think it’s a prerequisite of being a technology critic that you love technology, much like food critics love food, and art critics, art. The problem with education technology isn’t the technology itself, it’s the companies who have – deliberately – become metonymic with digital life itself. To search online, we ‘Google’. We don’t message, we ‘WhatsApp’. When we hear the word ‘Office’, we’re just as likely to associate it with Microsoft as with a brick-and-mortar place of work; frankly, it’s a wonder that we don’t ‘Amazon’ instead of shopping.

But technologies come and go. Most people my age will remember MSN Messenger, or maybe AOL Instant Messenger (AIM). If you do, you probably had a MySpace page. If you really embraced that digital side of life, maybe you had your own GeoCities webpage, complete with excessive <blink> tags. And I’m sure you have lots of fond, nostalgic memories of the internet as it was in the late 90s and early 00s. Those platforms have disappeared, largely replaced by social media like Facebook.

Facebook is not great. But that doesn’t mean that social media is bad per se. It means that Meta and Mark Zuckerberg – and the industry that created those two monsters – are at fault. I’ve made great connections on a bunch of different social media platforms, from MySpace to LinkedIn to Bluesky. And that logic extends to many digital technologies.

The corporate capture, lock-in licences, and general panopticon-level surveillance of companies like Google and Microsoft are huge problems in education, but I’d find it difficult to argue that writing everything by hand is better than typing into a Word or Google Doc. Office tools – documents, slideshows, and dammit even spreadsheets – are not inherently evil, even if the major companies that produce them have a whiff of sulphur about them.

I use AI for speech-to-text, type my PhD thesis in Word, and record podcasts and online courses surrounded by digital equipment. I love all of it. But I didn’t learn these skills in school, and neither did most people my age. And before we blame education for that gap, we need to ask: whose interests does that blame serve?

Reframing the problem

More often than not, it’s the education system – or educators themselves – that gets blamed for problems in adopting technologies. We’re “too slow”, our methods are “out of date”, and we’re “failing to prepare students for the real world”. But blaming education for the technology industry’s problems will get us nowhere.

The real problem is that the tech industry has successfully convinced education that “digital literacy” means proficiency in a handful of apps and platforms. Google doesn’t want students who understand how search algorithms work, how data is collected and monetised, or how to evaluate the trustworthiness of information systems. They want students who know how to Google. Microsoft doesn’t want young people who can think critically about cloud infrastructure, data, or vendor lock-in. They want Office users.

Tech companies have spent decades positioning themselves as partners in education, offering “free” tools and curriculum resources that just happen to train students in their specific ecosystems. The implicit promise is that digital literacy looks like knowing keyboard shortcuts in Google Docs, or how to format a PowerPoint, or which button to click in Canvas to pretty-up your poster. It’s brand loyalty dressed up as education, and worse, it sidelines a lot of great experiences with technology.

Real digital literacy would mean teaching students why search engines return certain results, how their data is being used, what alternatives exist, and when to be sceptical of platforms that offer free services. It would mean students who ask “who benefits from this technology?” and “what would I do if this platform disappeared tomorrow?” The tech industry has no interest in students who ask those questions. They want consumers, not critics.

Preparing students for a world that doesn’t exist

Another problem is that the tech industry has built its business model on planned obsolescence, but in education we’re trained to treat their platforms like permanent infrastructure.

MySpace, GeoCities, and MSN Messenger weren’t fringe products: they were the way people communicated and created online. And then they vanished. Not because the technology stopped working, but because the companies behind them decided they weren’t profitable enough, or got bought out, or simply moved on to the next thing.

Now think about how we teach technology in schools. We train students on Google Classroom, or Canvas, or whatever learning management system the school, sector or district has locked into a five-year contract. We teach them to collaborate in Microsoft Teams, to organise their lives in OneNote, to submit work through Turnitin. These platforms are treated like stable, unquestionable fixtures of digital life, inevitable and immutable.

But they’re not. They’re products, controlled by companies whose primary obligation is to shareholders, not students. And when those companies decide to pivot, rebrand, sunset a product, or jack up prices, schools are left scrambling, and students are left with skills that no longer transfer. Have a look through the Wikipedia entry for discontinued Microsoft products. If you’ve been teaching as long as I have, then you’ve probably explicitly taught students how to use one or more of these applications. And you may have taught them as “industry standard” apps that students would have to learn to succeed in the future.

The myth of inevitability is designed to pressure users into adopting whatever technology dominates right now. But we simply can’t prepare students for the technologies they’ll be using ten, twenty, or thirty years from now.

What can we do about it?

So if technology is great, and education isn’t the problem, then what can we do about it? It’s easy to feel a little helpless: these are multi-trillion dollar companies with their hands so deep in government pockets they can make politicians dance like ventriloquist’s dummies. They literally shape the language we use to describe digital technologies. But like I said earlier, they are not “technology”.

The first step is rejecting the myth of inevitability. When a tech company tells us their platform is the “industry standard” or that we’re “falling behind” if we don’t adopt it, we can recognise that for what it is: marketing. There is no inevitable technological future written in the stars. There are only choices about which technologies we fund, adopt, and normalise.

And there are alternatives everywhere, if we’re willing to look for them. The digital commons is full of open-source tools that do exactly what proprietary platforms do, without the surveillance, the lock-in, or the sudden price hikes: an approach the European Union is currently exploring as it tries to unhitch itself from US tech.

We can also question whether we need cloud services for everything. Do student essays really need to live on Google’s servers, or could they live on a school’s own infrastructure? Do we need a proprietary learning management system, or could a simple file-sharing setup do the job? Sometimes the low-tech solution is the better solution; not because technology is bad, but because less technology means fewer dependencies, fewer vulnerabilities, and more control.

This is not about rejecting technology. It’s about being deliberate, choosing tools that align with our values, that respect student privacy, that won’t disappear when a company pivots its business model. It’s about supporting the developers and communities building alternatives to big tech, and teaching students that the online world doesn’t have to be owned by five companies.

The tech industry has spent decades convincing us that their way is the only way. But remember: we built the internet on open protocols and shared standards before corporations enclosed it. We can build it that way again. We can choose federation over centralisation, community over monopoly, sustainability over growth at all costs.

Technology is great. But it’s only great when we get to shape it, rather than letting it – and the companies that profit from it – shape us. I don’t know what platforms my students will be using in twenty years. But the future of technology in education doesn’t have to look like the present. We just have to stop treating corporate platforms as inevitable, and start building the digital world we actually want.
