It’s easy to look back through history at the procession of technologies which fill our day-to-day lives and imagine that things were always bound to end up this way. From ubiquitous infrastructure such as telephone lines, electricity and the internet to services that we already take for granted, like music and television streaming, it can seem as though the trajectory of these technologies has been inevitable since day one. But making sense of the past by finding patterns in an incoherent jumble of events is something that humans are great at.
Take electricity, for example. It’s fairly well known that the development of electricity wasn’t a single breakthrough by one person, but a long series of discoveries, inventions, and mistakes. By the time Edison was conducting experiments with electric lighting and distribution, the foundations of electricity had already been laid by many before him. The journey could easily have taken a different turn. Had Edison’s direct current prevailed over Tesla’s alternating current, our modern electrical systems might look very different. Without the widespread adoption of AC, we wouldn’t have the power grids we rely on today. Without reliable power systems, the development of electronics would have been much slower. Without transistors, we wouldn’t have radios, computers, or smartphones. Without computers, there’d be no internet — and without the internet, no Netflix. The horror.
But of course, it’s impossible to say whether some equivalent wouldn’t have sprung up in its place. Similar stories can be told of phone lines, fibre optics and the acquisition and repurposing of countless apps and services which dominate our daily lives. Although they may seem it in retrospect, none of the technologies we use today were ever inevitable.
And yet, pattern-matching creatures that we are, we plot those courses through history, and not only that, project them into the future. Which is why it’s unsurprising that artificial intelligence, and in particular the generative AI of large language models and related technologies, is now being touted as an inevitable part of our future. But just as television or electricity or the internet in their current forms were never inevitable, we don’t need to agree with OpenAI CEO Sam Altman when he tells us that in just a few years’ time, we’ll all be dumber than the machines.
Ubiquity ≠ Inevitability
What Altman is pushing for is ubiquity, and specifically the ubiquity of his product. This has nothing to do with altruism or the future of humanity, and everything to do with simple economics. If OpenAI’s technology is ubiquitous, OpenAI will make more money. But we shouldn’t downplay the importance of ubiquity. Monopolising and saturating the market has worked well for companies from Coca-Cola to Apple, Meta to Nike, and it’s a business model that may well continue to be successful for companies like OpenAI.
And yet, ubiquity is not inevitability. Mark Watkins summed this up recently on his Substack:

It already seems as though Altman and Co. have succeeded in making artificial intelligence unavoidable in schools and universities: statewide partnerships with Microsoft in Australia, freemium education packages giving university students in the US access to ChatGPT, Claude and Gemini (sometimes arranged by individual universities), and students mandated to reflect on their use of the technology. Artificial intelligence is everywhere you look.
But ubiquitous as the technology may seem, educators and students should rightfully be pushing back against discourses of inevitability. Inevitability suggests a certain hopelessness, a sense of having given up. Inevitability is a shrug of the shoulders and a quiet grumbling acquiescence to the way things are, the way things will be.
As I’ve noted in previous posts, the ultimate aim of technology companies is to make AI “disappear into the woodwork,” because ubiquitous, invisible technologies are harder to critique. When technologies fade into the background of our daily lives, we stop questioning their presence, their purpose, and their effects.
“After all, if you do not resist the apparently inevitable, you will never know how inevitable the inevitable was.” — Terry Eagleton, Why Marx Was Right
Resisting Inevitability
Recently, I’ve seen several posts from educators on LinkedIn lamenting that continued discussions of AI ethics in education and the growing volume of critique are “getting in the way” of teaching students how to use the technology, and getting in the way of teachers using it to improve their productivity.
Frankly, I think that part of education’s responsibility is precisely to get in the way of technology, because only the energy of conscientious resistance and critique can address those discourses of inevitability. And what is the point of education, if not to ask hard questions, raise difficult conversations, and encourage thought?
Among those hard questions should be: What are we giving up when we stop resisting the discourse of inevitability?
We should know by now: anytime someone maintains that a technology is inevitable, they’re asking us to give up any say we might have over that technology – asking us to give up our ability to question, to alter, to refuse; they’re asking us to abandon our agency, the control of our future.
— Audrey Watters, Automating Distrust
I’ve never met a Year 12 student in Australia who can’t operate a smartphone, but I’m not aware of any schools teaching them how to do that. The idea of digital natives is a myth, but the idea of technologies designed to be user friendly and intuitive is not. ChatGPT is ubiquitous and increasingly user friendly. That does not mean we need to teach students “how to use ChatGPT” – that isn’t literacy, it’s compliance. And, frankly, anyone who wants to learn to prompt or talk to a chatbot can probably figure it out for themselves.
What I’m suggesting here is not necessarily resisting the technology itself. We’re almost at a point where that would be akin to resisting the internet or electricity in the classroom. Given the volume of students apparently using AI (which could be 60%, 86%, or 92% depending on your source), this isn’t even tech companies hyping their product: students really are using AI as much as OpenAI tells us they are.
But the way we use technology, and the form the technology takes in the future, is not set in stone. The line between ubiquity and inevitability may seem thin, but it represents the difference between a technology that is simply widespread and one that has been accepted as necessary, unchangeable, and permanent. When we confuse ubiquity for inevitability, we surrender our agency and critique to market forces and corporate interests.
What makes technologies truly inevitable is not their inherent value or transformative potential, but our collective decision to stop questioning them.
Cover image: Yutong Liu & The Bigger Picture / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
