This is part two of my “2024 near future updates”, where I’m exploring some of the technologies available now or on the near horizon which will impact education.
I started making these near-future predictions in the final chapter of Practical AI Strategies, released in January 2024. In the book, I discussed multimodal GenAI technologies such as image, audio, and video generation, as well as improvements in automated coding, which would make it easier for people with no technical skills to create simple applications.
In part one of this post, I explored three new areas on the horizon: deepfakes, pseudo-reasoning, and “general to specific” AI applications. You can read more about those concepts here:
In this post, I’m going to continue the exploration of AI-related technologies which I believe will coalesce and ultimately impact education in many and varied ways.
Offline AI
Artificial Intelligence products, including large language models, are so resource-intensive to build and require so much data and computational power that the largest, most capable models can only be built by companies with access to billions of dollars and significant infrastructure. This has meant that powerful models like GPT-4 and Gemini have stayed within the walled gardens of companies like OpenAI, Microsoft, and Google.
However, once built, a language model itself is not a particularly large or unwieldy piece of software. After training on petabytes of data, large language models are reduced in size to something which can fit easily onto the hard drives of most computers, or even the more limited memory of smartphones and other devices. Many companies, including some of the big technology developers like Meta, Microsoft, and Google, are also releasing models as open source, or perhaps more accurately, open weights (I’m not going to get into the distinction in this post).
The combination of open-sourced AI and increasingly efficient, smaller AI models leads to the first major implication for the near future of this technology: offline AI. A few weeks ago, I wrote a post explaining why I think educators should download and experiment with their own local AI models. The most powerful at the time of writing is Meta’s Llama 3.2. Its 1 billion and 3 billion parameter text models are just a few gigabytes in size, and can therefore be downloaded and run even on a slightly older mobile device, like my 2022 iPhone 14.
Larger models, such as the 70 billion parameter Llama 3.1, can be run on more powerful devices, like a new MacBook Pro or PC, though they will strain anything below the top end.
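Some rough arithmetic shows why these models fit on consumer hardware. The sketch below is illustrative only: the bytes-per-parameter figures assume common quantization levels (4-bit weights at roughly half a byte per parameter), not any official specification from Meta or anyone else.

```python
def model_size_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate size of a model's weights on disk or in memory."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 3-billion-parameter model at 4-bit quantization (~0.5 bytes/param):
print(model_size_gb(3, 0.5))   # 1.5 (GB) - small enough for a phone

# A 70-billion-parameter model at the same quantization:
print(model_size_gb(70, 0.5))  # 35.0 (GB) - needs a high-end laptop or desktop
```

The same calculation explains why full-precision models (2 bytes per parameter or more) stay in data centres: the 70B model balloons to 140 GB and beyond.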
The key to these models is that, once downloaded, they run entirely offline and on device. There are huge advantages to this:
- An offline on-device model does not send data to the cloud or to its developing company
- Even if you’re using Meta’s product, the notoriously data-hungry company does not have access to your interactions with the local language model
- This could be a huge win for companies working with sensitive data, including in education, where we’re frequently handling information that should not be shared with the likes of Meta and Google
There are also implications for accessibility. Not everyone can afford access to the paid subscriptions required to use top-tier generative AI such as GPT-4o or Claude 3.5 Sonnet, and although these companies offer free tiers, they tend to be limited in use and quickly run out of credits. Running artificial intelligence locally on your device means you never hit token limits, never run out of credits, and never have to pay a subscription fee.
There are even accessibility benefits in terms of using the technology in diverse locations, such as regions with limited or no internet access, although, of course, you still need access to sufficiently powerful technology to run the AI.
The implication of offline AI for the near future, though, is far more pragmatic. Companies like Apple and Google are already installing small on-device language models behind voice assistants like Siri and Gemini. These models significantly improve the assistants’ performance, and because they run on device, the technology is potentially faster and more efficient.
As language models continue to grow in capability, like Llama 3.2, and shrink in size, like Google’s and Microsoft’s small models, we will find artificial intelligence baked into many digital devices. I’ve joked that we’re 12 months away from chatbots being installed on electric toothbrushes, but honestly, I’m only half joking. Think about the implications when every sufficiently powerful device comes with artificial intelligence built in, whether we want it or not.

The Practical AI Strategies online course is available now! Over 4 hours of content split into 10-20 minute lessons, covering 6 key areas of Generative AI. You’ll learn how GenAI works, how to prompt text, image, and other models, and the ethical implications of this complex technology. You will also learn how to adapt education and assessment practices to deal with GenAI. This course has been designed for K-12 and Higher Education, and is available now.
Wearable Technologies
At the 2024 Meta Connect event, Mark Zuckerberg demonstrated a pair of augmented reality glasses named Orion, containing Meta’s most powerful AI and able to project images through the lenses at a fidelity rivalling leading virtual reality headsets. The Orion glasses can be controlled by voice and with a haptic band that tracks gestures. They have cameras, so they can see the world and use Llama’s vision model to report back to the user. Unfortunately, they cost a reported $10,000 to manufacture and are currently purely a technical demonstration.

You don’t need to fork out $10,000, however, to see the near future of these technologies. You can just head down to your nearest Ray-Ban store and pick up a pair of the Meta x Ray-Ban Wayfarers for a few hundred dollars. I purchased a pair in Australia a few months ago, and at the time, they did not have the capability to connect with Meta’s artificial intelligence. That restriction has recently been lifted, and so now I have a wearable device that, whilst lacking a visual display, can use the front-mounted camera to “see” the world and the Llama 3.2 image recognition technology to tell me what it means.
The Meta Ray-Bans will talk to you through the conductive speakers in the frames (the sound quality is excellent, by the way, and I’ve started using them as a replacement for headphones when I’m out and about and want to leave my ears uncovered). The camera is also excellent quality, not quite up to par with the rear camera on an iPhone, but certainly good enough for snapping the occasional photo. It’s also good enough, combined with the AI, to provide relatively meaningful contextual information.
Take these couple of examples: In one, I’m asking about the ripeness of a banana and then prompting for a smoothie recipe. In the other, probably more useful example, I’m translating an Italian poem into English in real time.


There are many companies dabbling with AI wearables, from creepy pendants that pretend to be your friend through to the laughably optimistic Humane AI Pin, from a tech startup that quickly crashed and burned once it became clear that people didn’t want to walk around projecting laser text onto their hands from an overheating lapel pin.

We will no doubt see more wearable technologies, but of all the form factors, I think that glasses will be the most useful. Many people already wear glasses, of course, but even for those that don’t, speaking into the void and having your voice picked up by the in-built microphones around the bridge of glasses like the Meta Ray-Bans feels much more natural than, for example, talking into a smartwatch.
Once I got used to the fact that I look fairly bonkers walking around talking to my sunglasses, I found that the Ray-Bans can be used both to converse with the AI and as a substitute for Bluetooth headphones when taking a call or an audio Zoom. I don’t wear glasses, but the Ray-Bans don’t feel like an imposition. In fact, in terms of the sensory experience, it’s actually nicer than wearing a pair of AirPods for an extended time.


Who’s this cool looking guy walking around the lake…talking… to himself…? Left: selfie taken on iPhone wearing the Meta x Ray Ban Wayfarers. Right: image of iPhone during a zoom call taken with the Ray Bans.
We will see a convergence of these first two technologies: offline AI running on powerful wearable devices. No internet required, no data sent to the cloud, no subscription, just AI on your face all the time. Imagine that.
AI Agents
AI agents are the new holy grail for tech companies. Having harvested all of our data for the last 20 years, using that data to build these capable large language models, the next step in the technology playbook is to deploy them to act on our behalf.
I’m using the term “AI agents” to cover broadly any artificial intelligence which can act autonomously, carry out multiple tasks on the user’s request, and interact with other tools and technologies.
AI agents are still not sentient, and they are not AGI, but they are the next step for companies like Anthropic, OpenAI, Microsoft, Google, and Apple.
What does an AI need to be agentic? It needs:
- The capacity to act in the physical or digital world with some degree of autonomy
- Vision capabilities (for example, to take screenshots and read a user interface designed for humans)
- Coding capabilities to run terminal commands, install software, tackle complex problems
- The power of a large language model to comprehend what it sees
- A bucket load of safety features and security guardrails to stop it from going rogue
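The capabilities listed above boil down to a perceive-reason-act loop with a guardrail check before anything is executed. The sketch below is purely illustrative: every function name (`perceive`, `propose_action`, `is_safe`, `execute`) is a hypothetical stand-in, not any vendor’s actual API.

```python
# A minimal, hypothetical agent loop: perceive -> reason -> act.
# The caller supplies the capabilities; the loop just orchestrates them.
def run_agent(goal, perceive, propose_action, is_safe, execute, max_steps=10):
    history = []
    for _ in range(max_steps):
        observation = perceive()  # e.g. a screenshot of the user interface
        action = propose_action(goal, observation, history)  # LLM decides next step
        if action == "done":
            break
        if not is_safe(action):  # safety guardrail: refuse risky actions
            history.append(("blocked", action))
            continue
        history.append((action, execute(action)))  # act and record the result
    return history
```

A toy run with stubbed-in capabilities might look like `run_agent("open settings", perceive=lambda: "desktop", propose_action=..., is_safe=..., execute=...)`. The interesting part is that the loop itself is trivial; all the difficulty (and all the risk) lives in `propose_action` and `is_safe`, which is why current demos like Computer Use remain buggy.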
We’re not there yet. I recently made a post about one such AI agent, Claude Computer Use from Anthropic. Computer Use is a demo feature currently only available through the developer portal, and it is about as buggy and risky as you might expect.
In that article, I talked about this being the “ChatGPT moment” (so November 2024), and what I meant was that prior to the release of ChatGPT, we had a lot of buggy, reasonably hopeless applications built using GPT-2 and GPT-3 technologies. In education, certainly not many people were paying attention to them, and then ChatGPT nailed the user interface by slapping a chatbot on top of their most powerful model and releasing it into the wild.
In the coming months, we will see the clunky, error-prone, limited functions of something like Computer Use released in its first commercially viable form, perhaps as the proposed Jarvis from Google, an AI agent developed for the Chrome browser, which can browse, shop, and explore the internet on your behalf. Or maybe it will come from Microsoft via a product like Copilot Vision, which can already “see” web pages and discuss them.
Since writing that original post on Claude Computer Use, I’ve done a few more experiments, first sending Claude off to purchase a pair of Levi’s jeans on my behalf. It was… okay. I asked it to find the cheapest pair, and it went straight to the Levi’s website (unlikely to find a discount there). It also misinterpreted the colour of the jeans, changing them from the black I had requested to navy blue. Maybe it was just questioning my taste and style, but the experiment shows that it’s certainly possible, if not yet practical, to send an AI out to shop on your behalf.
It’s also the first use case that I came up with where I started to see the point of this near-future technology. Follow the dollar signs, because for all of the talk of automation and efficiency, workload reductions and other benefits, the bottom line for generative AI technologies is for them to gather more data and make more money for the companies that produce them.
So of course, the most logical use of an AI that can act on your behalf is not to take over your computer and write complex code or use sophisticated tools to carry out your mundane, everyday tasks, but online shopping. Presumably this is so you have attention to spare for your other devices: the AI does the shopping on your laptop while you scroll TikTok on your phone.
Despite scraping the web clean of data, the game isn’t over yet. We still live in a data economy, and these companies are keen to produce tools which make it even easier for you to consume and contribute more of your data to that environment.
Implications for Education
Like the previous post, I’d like to end with a discussion of the implications for education (although, of course, these technologies will impact every sector).
Since the release of ChatGPT, we’ve been morbidly obsessed with students using AI to cheat, and some education providers have gone as far as trying to limit access to AI by banning ChatGPT or writing strict and punitive academic integrity policies. What does banning AI look like when it’s running offline, on a pair of glasses or maybe a ring connected to a tiny Bluetooth earpiece? Or when it’s possible to deploy an AI agent from a laptop or smartphone that can navigate the user interface of a learning management system by itself?
It’s worth sharing again one of the videos from my previous post on Claude Computer Use, showing that even in this rudimentary state, it has no problems with navigating an LMS and making submissions on the user’s behalf.
We moved away from the traffic light colours of our version one AI Assessment Scale, because stop-slow-go doesn’t work. I don’t want to sound techno-deterministic, but at this stage, there really is no stopping, banning, or fully controlling artificial intelligence. There is no way to guarantee that a student has not “cheated” using AI in most tasks where students have access to a computer.
But just as ChatGPT didn’t spell the death of the essay or destroy the education system overnight, we can be resilient in the face of these near-future technologies. To do so, we first need to acknowledge that they’re on the horizon, or, in some cases, already exist.
This is about awareness raising. If you’re reading this article, congratulations: you are now aware. Share this article and the previous one with your colleagues. Talk about the six areas of near-future generative AI and the implications for your subjects and institutions.
If you work in an online environment with K-12 distance education or a university with many, or perhaps even all of your courses online, you need to take a step back and look across the broad horizons of generative AI. Shift your focus away from ChatGPT for a moment, zoom out and imagine your institution in three or five years’ time, once these technologies have matured.
Many educators are angry that these technologies have been released without consultation and seemingly without concern for the implications. I don’t think companies like OpenAI, Microsoft, and Google are unconcerned about the implications of AI for education. I think they know full well how disruptive these technologies are, and they are frantically trying to compete for the best ways to exploit that disruption.
That doesn’t mean we need to let ourselves be exploited. Educators can wrestle control of this narrative back from the technology companies. We can reject the myth of personalised learning and the necessity of data surveillance.
But as November 2022 showed us, we cannot afford to be caught off guard again.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
