What happens when Generative AI disappears into the woodwork?

As part of my PhD studies, I read and write a lot of stuff that doesn’t really fit into my research, but which I find interesting anyway. I’m categorising these “spare parts” on my blog, and if you’re interested in following them you’ll find them all here.

At the moment we’re still in the thick of the Generative AI hype, with social media posts and articles telling us things like “you’re using ChatGPT wrong” and “here’s how you can maximise your profits and automate your business using AI”…

In my field of education, we’ve seen everything from “AI will be the death of the essay and high school English” through to “AI will revolutionise the education system”. History, and common sense, suggests that neither extreme is true. We know that the education system is remarkably resilient, and resistant to change. We also know that technologies often fail to live up to hype, and can take years to fully integrate into society (yes, even the iPhone). 

But in a way, I think the hype is healthy. For one, it’s raising people’s awareness of a complex and formerly obscure technology. Generative AI has existed in various forms for years, but it has never been so accessible, and you can’t deny that people are using it – and talking about the implications – much more since ChatGPT’s launch.

All the hype is raising international awareness of the legal, societal, and ethical implications of artificial intelligence more broadly, too. Books like Kashmir Hill’s Your Face Belongs to Us and Kate Crawford’s Atlas of AI are flying off the shelves. People want to know more about the dark side of AI, and that can only be a good thing.

In fact, it’s especially important that we have these conversations now, while we’re in peak hype. Because against all the flashy new apps and “built with AI” businesses, the big players are working hard to make AI fade into the background.

Photo by Leeloo Thefirst on Pexels.com

Into the woodwork 

Vincent Mosco’s 2004 book The Digital Sublime discusses the development of “cyberspace” in the 1990s. While that term makes the book sound frightfully old-fashioned, Mosco offers some incredibly useful insights that apply to what’s happening right now with Generative AI.

Mosco compares cyberspace to electricity, arguing that the Internet is more of an infrastructure and a system than a physical entity. Like the introduction of electricity, the rollout of the internet was slow, requiring new hardware and successive generations of technology before it could be safely installed in people’s homes.

Once electricity became ubiquitous, however, it literally receded into the woodwork, with wiring and infrastructure disappearing into the spaces between walls. The same, Mosco argued, happened with cyberspace. Digital divides notwithstanding, in many places people hardly think of using the internet – it’s just something that’s there. Smart watches, smart phones, even smart fridges provide (almost) seamless access to the internet, and for all intents and purposes it has vanished from consciousness. 

In a recent paper with Jason M. Lodge, Suijing Yang, and Phill Dawson, we extended the internet/electricity analogy to Artificial Intelligence. The article, titled It’s not just a calculator…, suggests we need new ways to conceptualise AI beyond thinking of it as a simple tool: “a new calculator” or “a new iPhone”. Instead, it’s a new electricity, a new phone network, a new internet: a new system and network that will come to pervade our lives (in many ways, and with technologies broader than generative AI, it already does). And like those other systems, it will recede into the woodwork.

If you’re enjoying these articles, join the mailing list to stay up to date. There’s no newsletter, just occasional emails with new blog posts and courses related to Generative AI:


As the magic fades…

Now that the initial wave of hype brought about by ChatGPT is fading, it’s interesting to see what the big companies are doing with generative AI. Microsoft and Google, for example, have been neck and neck in digital technologies for decades, and the current “AI arms race” is no exception. Microsoft has funded OpenAI to the tune of $10 billion for access to its GPT-4 model. Google has its own model, PaLM, and has invested heavily in Anthropic’s rival language model, alongside Amazon, which added another $4 billion to the pool.

But while these huge companies are currently very flashy about their AI services, there are already signs of generative artificial intelligence being integrated more seamlessly and invisibly into their products. The language around these products is shifting from emphasising the “magic” and “superpowers” of AI and towards much more neutral terms like “collaborator”, “assistant”, and “Copilot”.

The focus is also moving from feats of superhuman prowess towards efficiency and productivity, and in the case of Meta, “creativity” and “connection”. As public consciousness of the uses and ethical concerns of AI expands, the way these companies talk about their systems has started to subtly change.

In another example, Apple has been very quiet about its own AI, referring to “machine learning” or “deep learning” when it absolutely has to, or not mentioning it at all. This deliberately distances Apple from the AI hype of its rivals, and plays into the minimalism we’ve come to expect from the company. But it also means that when people use Apple products, they won’t be thinking about AI. They won’t be wondering which language model sits under the new predictive text functions, or what dataset was used to train Siri’s upgraded voice recognition or the new Personal Voice and Live Speech accessibility features.

On November 8th I’ll be running a webinar on how educators can use image generation in their day-to-day work. Check it out on Eventbrite.

So who cares?

So who cares if AI recedes into the woodwork? Well, that’s exactly the point. Because like the climate impact of the electricity grid, or the geopolitics and environmental concerns of the data centres that power the internet, or the monopolisation of the telephone networks, once these things become part of daily life people stop caring. And they stop asking questions.

Already, we’re seeing AI products rolled out in businesses, schools, and tertiary providers. Compare the current situation with a few months ago, when people were asking some pretty serious questions about whether ChatGPT should be allowed anywhere near schools. Some of that was fear, but some of it was a reasonable response to OpenAI’s poor track record with ethics and respecting intellectual property. Many of the products entering work and education are only one step removed from OpenAI’s model, built directly on top of GPT-4. And yet they’re already starting to seem much more acceptable.

The same is true for Meta’s AI products: their Llama model was trained on the now infamous Books3 dataset, and their image generation model Emu is arguably as problematic as Stability AI’s Stable Diffusion or Midjourney. But we’re so used to Facebook Messenger, Instagram, and WhatsApp that when AI is integrated seamlessly into those familiar products, people will stop asking questions.

Closing the window on AI critique. Model: Bing Image Creator (DALL-E 3)

The window is closing…

The window for critique is closing fast. Right now, journalists, educators, politicians, CEOs, thought leaders, and academics are all shouting pretty loudly about the ethical concerns of AI. Soon, it’ll be just a handful of dedicated reporters and those who base their academic or corporate careers around AI ethics. 

We’ve seen this exact pattern with technologies before, from electricity, to phones, to the internet. We’ve seen it happen in front of our eyes with social media and edtech. And if we’re not careful, AI will soon be invisible too.

If you have any questions, comments, or feedback on this article, get in touch using the form below:
