The AI headlines have been swept up in GPT-5, and whether it represents a monumental step forward towards Artificial General Intelligence (heads up – it doesn’t). But aside from the hype headlines, a few interesting things are happening around open source artificial intelligence, and even if you haven’t been paying much attention to generative AI beyond the big name brands like ChatGPT, I think this is something that you should take a look at.
Cover image: Suraj Rai & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
What is Open Source AI?
Open Source AI is often a bit of a misnomer, so it’s worth exploring what the term actually means.
Traditionally, open source software makes its entire source code available to the community, so that anybody can freely examine, adapt, or remix it. Open source is often associated with licences like the MIT License or the Apache License, which allow users to build both commercial and non-commercial software on top of the underlying code.
There are lots of great open source projects. The web itself is basically an open project, with widely agreed upon languages and open standards like HTML and HTTP, which aren't owned by any central body.
There are open source versions of most popular applications, including LibreOffice, which offers productivity tools that rival Microsoft Word, Excel, PowerPoint, Google Docs and so on.
But in artificial intelligence, the term open source has become quite contentious. Certain companies, like Meta, have been making their AI models open source for a few years now, but many members of the open source community are concerned that the underlying architecture of the LLaMA models – including the training processes, which datasets have been used, and some of the underlying algorithms and techniques – are not actually open source at all. It would be impossible to reverse-engineer one of Meta’s LLaMA models in the same way that one might reverse-engineer a truly open source piece of software.
Most people reading this blog probably aren’t super interested in the difference between open source and open weights, but it is important to note that the tech companies have co-opted this term.
The Rising Importance of Open Source
Open source has always had a place in the technologies community, particularly in software design and the internet. But recently, open source has become incredibly important in geopolitical conversations.
The European Union, for example, has recommended that member states prioritise open source as a way to unhitch themselves from US-based big tech companies. This has seen moves towards open source in places like German government institutions, or entire sectors in Northern Europe. Switzerland has recently worked on an open source large language model which – unlike the models from companies like Meta – is genuinely open source, from the dataset through the algorithms and training methods all the way up to the weights and parameters.
China has seen a number of large-scale open source AI models released recently, including DeepSeek. And in the last couple of weeks, the US AI Action Plan has recommended that the US focus on open source as a way to compete with its Chinese and European rivals.
Open source is a big deal, and open source artificial intelligence is now an incredibly important area of research and development in these technologies.
OpenAI Enters the Fray
OpenAI, ironically, started out in the open: it was founded by Sam Altman, Elon Musk and others as an independent organisation publishing open access research into deep learning and AI architectures.
I say ironically because, over time, and with heavy investment from companies like Microsoft, OpenAI has become more and more closed. One of the biggest criticisms levelled against OpenAI since 2022 has been that it has limited its published research and released only proprietary models whose underlying architecture is opaque to the public.
In the past few days, OpenAI has started to redress this by releasing two open-weight models of its own: gpt-oss-120b and gpt-oss-20b. These are the first open-weight language models OpenAI has released since GPT-2 in 2019 – long before GPT-3.5 (the original ChatGPT model).
Despite the hype around the release of OpenAI’s open source models, it’s not possible to run either of the models on most consumer laptops or devices. It certainly isn’t possible – as some commenters have suggested – to run even the 20-billion parameter version on a smartphone, unless you’re packing some serious hardware that I’ve never heard of. But it is possible to download and run OpenAI’s 20B and 120B models on a sufficiently powerful piece of hardware, or to run them via other open source hosting services.
More importantly, it points towards OpenAI’s bigger plans to continue to dominate both the open and closed source AI markets, with their main competitors being Meta and Chinese products from companies like Alibaba.
Why Should Educators Care About Open Source?
I’ve written a couple of posts in the past about why I believe educators should experiment with open source AI, so I won’t go over that territory again. But I would recommend that you read this article, which suggests a few ways to experiment with open source artificial intelligence.
Amongst them, I mention Ollama, which is still my preferred way to run AI on my device. Although my 2023, 18GB M3 MacBook Pro isn't powerful enough to run even the smaller 20-billion parameter gpt-oss model, it is more than capable of handling something like the 3-billion parameter LLaMA 3.2, or any of the numerous small models like Google's Gemma or Mistral 7B.
A general rule of thumb (though not an exact science) is that the largest model you can comfortably run, measured in billions of parameters, is about half your device's RAM in GB. So for an 18GB RAM MacBook, that's up to a 9-billion parameter model at the top end. Again, that's not a hard and fast rule – Mistral Large, a 123-billion parameter model, is a 73GB download, and it grinds my MacBook to an absolute halt.
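To make that rule of thumb concrete, here's a minimal Python sketch. The bytes-per-parameter figures are my own rough approximations for common quantisation levels, not exact numbers – real downloads vary with architecture, format, and overheads:

```python
# Rough sizing arithmetic for running local models. The figures below
# are approximations (GB per billion parameters at common quantisations),
# not exact measurements.

GB_PER_BILLION_PARAMS = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def max_comfortable_params(ram_gb: float) -> float:
    """Rule of thumb: billions of parameters ≈ half your RAM in GB."""
    return ram_gb / 2

def approx_size_gb(params_billions: float, quant: str = "q4") -> float:
    """Very rough in-memory footprint of a quantised model."""
    return params_billions * GB_PER_BILLION_PARAMS[quant]

print(max_comfortable_params(18))  # 9.0 — the 18GB MacBook example above
print(approx_size_gb(9, "q4"))     # 4.5 (GB), leaving headroom for the OS
```

Note that the half-your-RAM rule is deliberately conservative: a 4-bit 9B model only occupies around 4.5GB, but you need headroom for the operating system, other applications, and the model's working memory.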
It is also true that some open source language models are so tiny that they will run on a phone. In that earlier article, I demonstrated using a couple of models on a three-year-old iPhone 14 Pro. It got a bit hot, but it did the job.
Ollama has just released its first desktop application, which makes it even easier to download and experiment with open source AI, provided you have a sufficiently powerful device. You can try both of OpenAI’s models (though they won’t work unless you’re running a high-spec device) as well as dozens of other open source and open weights LLMs.
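Beyond the desktop app, Ollama also exposes a local HTTP API (on port 11434 by default), so you can script against whichever model you're running. Here's a minimal sketch in Python – the model name is just an example, and it assumes Ollama is installed and running locally:

```python
# Hedged sketch: querying a locally running Ollama server over its
# HTTP API. Assumes `ollama` is installed, running, and has pulled
# the model in question (llama3.2 here is illustrative).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "llama3.2") -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    """Send the prompt and return the model's text (requires Ollama running)."""
    with urllib.request.urlopen(build_request(prompt, model)) as resp:
        return json.loads(resp.read())["response"]
```

Nothing here leaves your machine – the request goes to localhost, which is rather the point of running open models locally.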

Why Open Source AI is Important
The reason that all of this is important – other than the political advantages – is that it points towards an important aspect of the near future of the technology.
In the article The Near Future of Generative AI, I talked about large language models being pushed further and further towards lightweight, open source instances that run on-device. This is “AI everywhere”, and it is certainly the trajectory of the technology. Small, open source language models will make their way onto every consumer technology imaginable.
Phones are already being rolled out with AI built into the hardware. Laptops and personal computers are following suit, and it’s not difficult to imagine a near future where every electronic device – which, let’s face it, is practically every smart thing in our house – will have its own local, offline, large language model running in the background.
This might be good for a couple of reasons. Local large language models don’t have to rely on energy-hungry data centres and don’t send as much – or any – personal data over the internet.
The race to create the best, most efficient, most effective large language models in the open source world is a race to secure this on-device market.
With some 700 million users, OpenAI's ChatGPT is still the most widely used cloud-based AI product, but if OpenAI – or Meta, Alibaba, Mistral, or any of the others – takes the lead in producing open source AI, then it's likely that their models will power the artificial intelligence sitting on every electronic device you own, whether it's connected to the internet or not.
If I'm optimistic, this could mean increased access to powerful AI, even on inexpensive consumer hardware, meaning that students and teachers won't need to pay for premium licences for cloud-based AI.
Of course, it could also mean that everybody has access to a cheap, accessible, and average artificial intelligence wherever they look, while those premium, proprietary, cloud-based models continue to dominate the top end and are only available to those that can afford them.
Either way, I think that open source is a good direction for the industry overall. I want to see more open source products. Not just ones from large tech companies like OpenAI and Meta, but the kind of thing that the government of Switzerland is working on: truly open source from the ground up.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch: