In December this year, the blog hit a milestone of 1,000,000 reads. Thanks to everyone who has joined me on the journey in 2025! It turns out the pace of AI didn’t slow down at all this year… I’ve spent the past three years writing about the technology and its implications in education, and the past twelve months have felt as frenetic as ever.
In this article, I reflect on some of the highlights from the blog, looking at trends in the technology, how attitudes towards AI hype have shifted, and where I think the industry is taking us in 2026.
New Tech Trends: Deep Research, Agents, and Vibe Coding
One of the most notable features of GenAI for the past three years has of course been the relentless pace of development, with all of the major players releasing update after update. In 2025, this included the release of the much-hyped (but largely disappointing) GPT-5.
OpenAI released a research preview of their computer-using agent “Operator” to Pro-tier users in early 2025. By the end of the year, AI Agents were all the rage, and OpenAI had commercially released their new product (now called “Agents”) in every paid tier and in the new Atlas browser. I remain unconvinced about the utility of these so-called Agents beyond low-level “fetch and carry” tasks like completing online shopping and booking hotels, but with the billions being poured into them, I’m sure we’ll continue to see advances in 2026.
Deep Research made its way into the mainstream in February, with OpenAI, Perplexity, Google and Anthropic all releasing their own variations on these “thinking” applications. ChatGPT’s was the first, and arguably most successful, cab off the rank, followed shortly by the Gemini 1.5-flavoured Google version. I did a number of posts over the year focusing on Deep Research products, including this early comparison of the leading companies:
This closing quote from that article became one of my most-shared thought-nuggets over on Bluesky, obviously resonating with the (admittedly disproportionately anti-AI) crowd on that platform:
The only conclusion I could arrive at is that it is an application for businesses and individuals whose job it is to produce lengthy, seemingly accurate reports that no one will actually read. Anyone whose role includes the kind of research destined to end up in a PowerPoint. It is designed to produce the appearance of research, without any actual research happening along the way.
On the plus side, we are starting to see some genuinely useful applications of GenAI in education. Many teachers have been exploring “vibe coding” in familiar apps like Canva, and the ability to create increasingly powerful applications continues to grow with products like Google Gemini 3 Pro.
Teaching AI Ethics
Early in the year I relaunched my popular Teaching AI Ethics series. The original series of articles from 2023 is still available here, and well worth a read if you’re interested in the fundamental problems of the AI industry.
Throughout 2025, I’ve been steadily updating each article with new case studies, resources, and teaching ideas. All of the Teaching AI Ethics materials are released under a CC BY-NC-SA license and are free for educators to use, remix, and share. I’ll continue to update the series in 2026, and will also be releasing a complete Teaching AI Ethics course over on the Practical AI Strategies website.
Start here:
- Teaching AI Ethics: Bias 2025
- Teaching AI Ethics: Environment 2025
- Teaching AI Ethics: Truth 2025
- Teaching AI Ethics: Copyright 2025
- Teaching AI Ethics: Privacy 2025
- Teaching AI Ethics: Data 2025
Stay tuned in 2026 for articles and resources on the topics of emotion recognition, human labour, and the complex power structures of the AI industry.
Subscribe to the mailing list
As internet search gets consumed by AI, it’s more important than ever for audiences to directly subscribe to authors. Mailing list subscribers get a weekly digest of the articles and resources on this blog, plus early access and discounts to online courses and materials.
No data sharing, no algorithms, no spam. Unsubscribe any time.
Debunking Myths and Deflating Hype
Walking the line between using and resisting AI is always a tightrope act, and it would be hypocritical of me to tell anyone what not to use given that I frequently use many multimodal GenAI products myself. However, I have poured a lot of time and energy over the past three years into debunking some of the persistent myths about AI, and trying to deflate the preposterous levels of hype surrounding the technology.
Here are a few of my favourite articles from 2025 designed to ground conversations about AI in reality:
In Resist, Refuse, or Rationalise, Just Don’t Roll Over, I discuss the growing number of educators pushing back against the narrative that we must use AI in education. Personally, I still think there’s a place for such resistance, and if that place isn’t in schools and universities, I don’t know where it is.
It’s also vital for educators to push back on the Myth of Inevitable AI, which mostly serves technology companies. If we’re lulled into thinking that Sam Altman and co.’s version of the technology is the only way, then we’ll miss out on a lot of what AI has to offer. Steering education away from Big Tech narratives of inevitability might free us up to explore open source AI, AI for accessibility, and other potentially fruitful (but much less hyped) avenues.
In 2025, we reminded tech companies that core parts of education such as lesson planning aren’t just “low-hanging fruit” to be delegated away to AI systems. In fact, I strongly believe that “time saved” is the wrong measurement for using GenAI in schools, and I spent a lot of time this year writing about alternative ways to focus on the technology.
And finally, in 2025 some of the tech companies pushing GenAI really started to show their colours. OpenAI in particular became more and more aggressive with its manoeuvres into education, going as far as to suggest policies that would impact education. As I wrote in July, We Cannot Let OpenAI Write Our Education Policy. The company is certainly coming for education, with free products for students and teachers, partnerships with preeminent edtech companies like Instructure (makers of the Canvas LMS), and products like “Study Mode” targeted directly at students.
Simultaneously, OpenAI made several pledges to reduce its guardrails rather than strengthen them. This included the ability to generate “more graphic” images in ChatGPT, and a promise from Sam Altman that the platform would allow explicit content, probably in Q1 2026.
Big Ideas: Frameworks, PD, and More Ways to Use GenAI
As usual, I split my time in 2025 between telling people why they should avoid GenAI, and how they might use it. Again, I appreciate the irony, and generally feel like a 90s teen magazine (10 reasons boyfriends/girlfriends suck, and how to get one…).
Of course, the AI Assessment Scale continued to dominate a lot of those conversations. In 2025 my coauthors and I moved all of the AIAS materials from this blog to their new permanent home at aiassessmentscale.com, which now includes all of the open access articles, free resources, translations, and commentaries about the Assessment Scale.
I spent much of my time in 2025 Talking to Teachers About GenAI in Schools, and learning What Educators Want to Know About GenAI. One major area that kept coming up was the need for clear professional learning aligned to existing school values. Of course, that’s not exactly straightforward when there are so many competing priorities. I wrote two articles on the topic of “expertise”, one of which focused on why PD for GenAI can and should be folded into discussions of domain knowledge. In short: schools need subject matter experts more than they need “AI experts”, and so PD about AI should largely be covered in the disciplines and through a clear pedagogical lens.
One of the biggest shifts in my attention over the past three years has been a move away from “prompting” and towards processes. I never quite believed in the future of “prompt engineering”, and as models have continued to improve, the need for perfect prompts has continued to decline.
But the importance of having clear processes for using GenAI platforms continues, and in the June article Process > Prompts I sketched out a few ways to approach multimodal GenAI, reasoning models, improved internet access, and other features of leading models. In a later post, I outlined a clear approach for working with any of these applications, which ultimately turned into a new flagship course over on the Practical AI Strategies website:

In 2026, I’ll be updating my Rethinking Assessment for Secondary Education course with a complete series on assessment design, including these 5 Principles and a number of ways to learn About, With, Through, Without, and Against AI in schools. These resources are based on work with hundreds of educators in workshops across Australia and worldwide.
Many institutions moved away from strict enforcement of AI bans in 2025, but there were still plenty of heated conversations about so-called AI detectors, student “cheating” behaviours, and AI misuse. In 2026, I hope we can continue to push away from these discourses and into more robust discussions of what education looks like with (and without) AI.
And finally, there have of course been new national frameworks, both inside and outside of education. On the blog, I’ve written about two of the most important recent frameworks: The National AI Plan and the Australian Framework for AI in Higher Education:
Where Next for 2026?
I’ve made a number of predictions over the years, and most of them have been accurate. Partly that’s because I stick to 6-month timelines: there’s no real telling what is going to happen much further out than that. It’s also because I’m keenly aware that GenAI is Normal Edtech, and as such follows normal adoption curves and challenges, even if the rate of development is crazy.
I think that the GenAI Bubble Will Burst at some point in the (near-ish) future, and that when it does it will have serious effects in education and most other industries. For now, though, there will continue to be advances in a few obvious areas, including AI Agents, multimodal GenAI, and the continuous arms race between the biggest developers.
In 2026, I’ll publish the remaining 3 “near future” predictions, including the increased global focus on so-called Sovereign AI.
Beyond these major predictions, I have a few ideas about what is next for 2026. Schools and universities will still be scrambling to update policies and guidelines. Students will continue to use GenAI in ways which defy expectations (and sometimes rules), but they’ll also increasingly join the ranks of people refusing to use GenAI for their own ethical reasons. Educators will need professional development, but they won’t be given any extra time or resources, so we’ll need to think of increasingly novel ways to deliver PD about AI.
I’m sure 2026 will feel as frantic as ever where AI is concerned. Wherever you’re joining me from, thanks again for reading the blog and I hope you have a restful holiday season. See you next year!
