It has been a couple of years since the release of ChatGPT threw the education sector into a tailspin, with OpenAI’s chatbot racking up hundreds of millions of users worldwide – many of them students. But the passage of time doesn’t necessarily mean we have more of a handle on the technology now than we did in 2022. If anything, the pace of development in the past two years has meant that educators have struggled to adapt, and I am still being called into schools and universities to introduce GenAI.
It’s hardly surprising: educators have other things to worry about, and are notoriously time-poor and under-resourced for professional learning. In my discipline – English – we have a curriculum that has barely been able to keep up with digital texts in the form of blogs, social media, streaming, and video games. All of a sudden, we have an entirely new mode of multimodal text production to contend with. Nobody asked for it, and nobody planned for it.
But it isn’t going away. Both Google and Microsoft have decided to increase the subscription fees for their applications and fill them chock-full of GenAI. Whether you want it or not, Gemini and Copilot will soon occupy the sidebar of Google Docs, Slides, Sheets, and all of the Microsoft Office applications. The “keep AI out of the classroom” ship has long since sailed, unless you’re committed to doing everything on pen and paper.
And while the education sector as a whole has been typically slow to react, there have of course been pockets of adoption and experimentation. Every school I work with has a handful of staff who are using GenAI for everything, from curriculum design to teaching. Sometimes, it’s happening in secret. “AI guilt” is still a thing, and many employees feel like they’re getting away with something when they use ChatGPT to help plan their lessons. Students, of course, are using GenAI in all kinds of ways – some good, some bad.
This post is a (re)introduction to GenAI for those educators who still feel like they’re behind, or for those who experimented in the early days after ChatGPT’s release but haven’t returned to the technology since. It’s also for those who have tried to keep on top of every update and advance, only to find that the arms race between Google, Meta, Amazon, Apple, Microsoft and OpenAI has left their heads spinning.
It’s a long one! Bookmark this page, browse through the table of contents below, and share it with your colleagues. Wherever you’re at with your understanding of GenAI in schools, hopefully you’ll find something useful here.
Table of Contents
What is Generative Artificial Intelligence?
A lot of the hype and misinformation around Generative AI stems from a lack of understanding of how these systems work. Much of that is down to deliberate smoke-and-mirrors from tech companies who would prefer you to believe that GenAI is magic, but you don’t have to be a software engineer (or a wizard) to grasp the basics of Large Language Models (LLMs) like GPT. Despite the tech company hype, these predictive AI models aren’t sentient, and they’re not about to suddenly “wake up” and become conscious…
Without getting too bogged down in the details, Generative Artificial Intelligence is the term used to describe a set of technologies which generate content based on training data. That data can be text (including programming languages), images, audio, or other media, and is used to train a system to predict and generate new output.
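If it helps to make that “predict and generate” idea concrete, here is a minimal sketch using the open source Hugging Face transformers library and the small, now very dated GPT-2 model. The model and prompt are illustrative assumptions only, not something a school would deploy, but the principle is the same for ChatGPT and its competitors: given some text, the model repeatedly predicts a plausible next token.

```python
# A minimal sketch of text generation with a small open model (GPT-2),
# using the Hugging Face "transformers" library. Illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The students walked into the classroom and"
result = generator(prompt, max_new_tokens=20, do_sample=True)

# The continuation is not looked up anywhere: it is predicted token by
# token, based on patterns learned from the training data.
print(result[0]["generated_text"])
```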
One of my early blog posts explains the training processes in more detail through the analogy of the AI Iceberg, which always features in my presentations.
GenAI Ethics and Education
In 2023 I wrote an extensive series called Teaching AI Ethics which covered nine areas of ethical concern, from bias to human labour, environmental impact to copyright, and others that I identified as particularly relevant to the education sector. That original series is still one of the most visited pages on this site, and this year I’ll be updating all of those areas.
Until the updates, this brief introduction flags the most pressing concerns that come up when I talk to teachers and students.
Environmental Concerns
By now, most people are aware of the environmental impact of Generative AI. There have been many news headlines – especially given the recent climate catastrophes worldwide – on the increasing power and water demands of AI. Companies like Microsoft and Google have blown past their sustainability targets and are even turning to nuclear power as a potential solution to the increased energy burden of AI.
But the issue is not as simple as the headlines make it appear, and it is very difficult to get an accurate understanding of how much energy these models consume in training and everyday use. Educator Jon Ippolito recently wrote an article in an attempt to find up-to-date information on energy consumption, and highlighted the obfuscation by developers and the conflicting reports. However, he also points out that AI power consumption – though high – is still only a fraction of overall data centre power usage, and that data centres account for around 2% of global energy demand.
While it’s of course important to understand and challenge the energy and water demands of AI, it’s worth bearing in mind that other technologies, including streaming, internet use, and the manufacture of smartphones and other devices, contribute much more to global energy consumption.
Copyright and Intellectual Property
GenAI is a contentious technology, and the industry has been beset by legal battles from the outset. Image generation has been the main site of these struggles, with many high-profile class action lawsuits in the US still being fought. Companies like Stability AI (developer of Stable Diffusion) and Midjourney have been accused of unlawfully scraping the intellectual property and copyrighted works of artists, photographers, and illustrators, and then creating products which operate in direct competition with them. The technology companies’ defence generally centres on the interpretation of US copyright law, which they argue allows the use of copyrighted works for training machine learning systems under “fair use”.
OpenAI and Microsoft have been sued by authors and publishers, including in a case from the New York Times alleging that GPT can produce verbatim output from NYT articles. That case is still ongoing, and like the image generation cases is unlikely to be resolved soon. From a technical standpoint, OpenAI’s response was to change the way ChatGPT produces output to make it harder for a user to reproduce content from the training data: but this is still at odds with their claim that AI doesn’t “learn” the data it is trained on.
And in audio generation, Universal Music Group, Sony, and others have sued AI companies Suno and Udio for obvious appropriation of copyrighted data in their models. Again the defence from the tech companies is “fair use”.
These are complex legal battles, and personally my main issue is that they are being fought on the premise of one country’s legal definition of copyright but impact global artists and creators. In Australia, for example, “fair dealing” is quite different from “fair use”, but if a US judge rules that the technology companies are training legally then it doesn’t matter if Suno has trained on Australian musicians, or OpenAI on Australian authors.
Not all GenAI is created equal. Adobe, for example, attempted to address these issues by initially training on Adobe Stock images (though arguably without the full consent of contributors) and openly licensed content. But once Adobe opened its stock library to royalty payments for AI-generated images, many people uploaded images generated on other, less “ethical” platforms like Midjourney. Essentially, Adobe Firefly might be trained on “laundered” AI images. Still, it’s probably the most ethical attempt so far from the big commercial players.
If you’re interested in digging deeper into the ongoing arguments about data sourcing, it’s well worth checking out Ed Newton-Rex’s Fairly Trained.
Bias and Representation
When I speak to schools about GenAI, bias is generally the first ethical issue I cover. After the release of ChatGPT it quickly became apparent that these models could be prompted to create racist, sexist, and otherwise discriminatory output. Over time, the developers have put in place guard rails and safety features to reduce this risk; but the bias is still in there.
The bias mostly comes from the dataset and the subsequent training. With a language model like GPT, the majority of the training data is in English and scraped from the internet. Internet user demographics mean that the bulk of this text is written by white, male Americans, and this influences the output of the models. When the developers themselves are also in that demographic, training processes might compound the “worldview” of the model. Offensive and discriminatory content may also enter the system through the data, especially when it is scraped from unfiltered sources like Reddit and social media.
While OpenAI, Microsoft and co. claim to have made efforts to remove bias from their products, the issue has also become a political minefield. Elon Musk, whose company xAI develops the Grok model, has been complaining about “woke” AI since ChatGPT’s release. Now, given his political connections to Donald Trump, it seems that view is taking hold. OpenAI has “quietly removed references to politically unbiased AI” from its policies and Mark Zuckerberg “has gone full MAGA“, which will almost certainly have implications for Meta AI. The Llama language model from Meta is already the most widely used open source GenAI model.
Centralisation of Power
Tying many of these issues together is something much more troubling: the centralisation of power that is reinforced by AI. In order to produce an AI model, a developer needs access to lots of data, lots of computational power, and lots of money. OpenAI began as a small nonprofit organisation, but a huge cash injection from Microsoft enabled it to train the first ChatGPT model. Meta AI is installed on billions of devices through Facebook, Instagram, and WhatsApp. Google’s Gemini model is being built into every piece of Google-owned software and hardware. Apple has partnered with OpenAI to add ChatGPT to new iPhones. Amazon is funding Anthropic and building its own powerful AI systems. And Alibaba is producing its own GenAI in competition with its US counterparts.
The one thing all of these organisations have in common is that they are already among the wealthiest companies in the world. The arms race to develop and deploy GenAI is putting more and more power into the hands of the already powerful. In education, these companies hold huge influence. Most schools in Australia are either a Microsoft or a Google School, relying on Microsoft 365 or Google Workspace for apps used by both students and teachers. These companies have access to school data, and can even influence curricula and government policy. They are partnering with state and federal governments worldwide to train “education friendly” chatbots.
There are serious questions to be asked about how much power the education sector wants to continue handing over to these companies with the increased pressure to incorporate GenAI.
Recent Updates and Advances
It is easy to take all of these ethical issues and adopt the position that there is no ethical way to use Generative AI. But that is an overly simplistic perspective, and one that, ultimately, won’t be very practical for students. AI is problematic, perhaps even dangerous, but that shouldn’t be used as an excuse for blindly avoiding the technology. I’ll write more about conscious resistance and refusal later in this article, but it is also necessary to look at the developments since 2022: AI can probably do more than you think.
Text Generation
When ChatGPT was released, it used a large language model called GPT-3.5. At the time, this was a state of the art language model, and much more capable than OpenAI’s GPT-3 or any of the competition. Then, in March 2023, OpenAI released GPT-4, and there was a noticeable improvement in quality.
Now, GPT-4o is the “free” model available to all users, but the competition has also caught up. Even open source models like Meta’s Llama and Alibaba’s Qwen outperform the original GPT-4 on many tasks. The implications of that are significant, and I’ll write more about open source AI in the “Near Future” section at the end of this article.
AI text detectors have never worked, and that hasn’t changed as the models have become more powerful. The quality improvements since 2022 also mean that it is not possible to reliably detect AI generated text by eye. Although there are many basic AI “tells” (words like “delve”, em dashes, strange formatting, US spellings, etc.), these are trivial to avoid for a sufficiently skilled user.
Image Generation
If you compare the output of OpenAI’s DALL-E 1 (August 2022) to the current DALL-E 3 model, you can instantly see the improvements made in just a couple of years. But once you go beyond DALL-E (also used as the image generator in Microsoft Copilot), it becomes clear that image generation has come even further than many people realise. The following images all use the basic prompt “photo of two students talking in a classroom”. Google’s image generator blocks the word “student”, so that prompt uses “people” instead.
Top: DALL-E 1 August 2022 (left), DALL-E 3 January 2025 (right)
Bottom: Flux Dev (left), Google ImageFX/Imagen 3 (centre), Midjourney (right)
Training and intellectual property concerns aside, many people now believe that photorealistic, undetectable AI images are basically a “solved problem”. Think about what this means for education. On the one hand, it is easier than ever before to create visuals and resources. On the other, it is almost trivial to create convincing fake images, and GenAI is already being used extensively for pornographic, violent, or misleading deepfake images.
See if you can tell a real image from a fake one. Over 70,000 people have played this game since I released it in late 2024, and only a handful have hit a 10/10.
Video Generation
The next frontier after image generation is video, and this is still an area that will need to see a lot of improvement before it is “production ready”. AI generated videos are obvious: they tend to be short, uncanny, and occasionally ridiculous. But, like all of these other offshoots of the technology, they are improving constantly.
Still, OpenAI’s Sora, which was demonstrated back in February 2024, has only just been released. It is also very far from a polished product, and the videos are often… terrifying. Here’s an example generated from the following prompt: A live stream of an unboxing of an amazing new tech product, a man opens the box and pulls out a hi-tech device, shot on a DSLR camera.
Other video generators such as Runway similarly struggle with real world physics and plain old common sense, with backwards-walking people, strange anatomy, and dissolving body parts a common feature.
Video generation will continue to improve, and we will reach a point where – like image generation – video output is undetectable without sophisticated “digital forensics”. There might be benefits for creators and educators, including creating on-demand, engaging resources. There will also be serious social impacts as nothing we see online can be trusted. Either way, we need to be prepared for what is coming over the next 12-18 months.
Audio Generation
As discussed above, some of the biggest battles in GenAI and copyright are being fought over audio generation, and especially the generation of music. But while these cases slowly move through the courts, the technology companies will continue to develop their products. Voice cloning, music generation, and sound effects are all already possible with GenAI.
ElevenLabs is a powerful voice generator capable of producing both AI voices and realistic “clones” of a user’s voice. Suno and Udio can produce songs – both instrumental and with (often terrible) lyrics. Adobe is working on GenAI for movie soundtracks and effects. Google has a number of beta products which can produce both music and sound effects. Even ChatGPT has a voice mode which uses a “speech-to-speech” model to create a believable, humanlike AI voice.
Here are two “songs” with the same basic prompt in two genres:
Australian country song about artificial intelligence gobbling up the education sector and replacing teachers with chatbots
Post hardcore progressive prog song about artificial intelligence gobbling up the education sector and replacing teachers with chatbots
Code Generation
One area that I have always thought is promising for AI in education is the use of GenAI for coding and creating applications. I don’t mean that every educator should be a software developer, but as these tools increase in accuracy and quality, it becomes more and more possible for educators to create their own web pages and apps on the fly: it’s build your own edtech, and I think it has huge potential.
Last year, when Claude 3 was first released, I tried building my own app. I have very limited coding experience, as I discuss in the article, but I was able to build a passable app in a weekend. Already, the capabilities of Claude, ChatGPT, and other GenAI models have improved, and the same activity would now take just a few hours.
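For anyone curious about what this looks like in practice, here is a minimal, hypothetical sketch of asking a model to build a small classroom app through Anthropic’s API rather than the chat interface. The model name, prompt, and file handling are assumptions for illustration; in reality I simply typed prompts into Claude and copied the code it produced.

```python
# A rough sketch of "build your own edtech": asking Claude to generate a
# single-file web app. Assumes the Anthropic Python SDK is installed and an
# API key is set in the ANTHROPIC_API_KEY environment variable; the model
# name is an assumption and will date quickly.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = (
    "Write a single-file HTML and JavaScript app: a flashcard quiz on Year 9 "
    "English terminology (metaphor, simile, allusion, and so on). Include a "
    "score counter and a 'show answer' button. Return only the code."
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=4000,
    messages=[{"role": "user", "content": prompt}],
)

# Save the generated code to a file that can be opened directly in a browser.
with open("flashcards.html", "w") as f:
    f.write(message.content[0].text)
```

The same prompt pasted into the Claude or ChatGPT web interface produces much the same result, which is exactly the point: no software engineering background required.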
Educator Scott Letts has taken this even further, with his grade six students using GenAI to build amazing applications like a game designed to help young people learn sign language. Check out his post about it here:
Features, features, features
Although ChatGPT started out as a minimalist app (text-in-text-out), the last few years have seen all of the major AI developers releasing feature after feature. This has contributed to a lot of the confusion around what GenAI can and can’t do, particularly with OpenAI’s awful naming conventions (GPT-3.5, 4, 4v, 4o, o1-preview, 4o-with-canvas, 4o-mini…) and Google’s slew of miniaturised applications which can be found all over the place, from the “AI Test Kitchen” to the Labs and the Studio.
Anthropic’s Claude followed a similar trajectory, first adding image recognition capabilities, then code generation and the creation of “artifacts”, and finally features for selecting from the various models (3.5 Sonnet, 3.5 Haiku, 3 Opus) and changing the writing style of responses.

Microsoft has also been guilty of AI-feature-overload, first releasing Bing Chat and Bing Image Creator, then updating to Microsoft Copilot, and eventually releasing Copilot Pro, Microsoft 365 Copilot, and Copilot Studio.
It’s impossible to keep track of these various feature and app updates, but here’s at least a partial list from the major developers:
- GPT-4o is the new “default” for free and paid users. It now brings together previously separate features: Canvas for inline editing of text and code, internet search, image recognition, the code interpreter (aka “Advanced Data Analysis”), “Projects” for managing chats and uploaded files, and image generation with DALL-E 3.
- Advanced Voice Mode has been rolled into the ChatGPT app (now just called “voice mode”).
- Files can be uploaded to ChatGPT either via direct upload or Google Drive integration.
- ChatGPT’s “memory” feature stores information between chats, giving the impression of “learning” about a user. It stores information automatically or when explicitly instructed to do so.
- OpenAI o1 is the “advanced reasoning” model.
- GPT-4o with scheduled tasks is a beta feature that sends notifications like a to-do list app.
- GPTs, confusingly, are the name given to OpenAI’s custom chatbots, which allow you to add files and instructions and release chatbots for public or private access.
- Some ChatGPT Plus and Pro users are starting to get access to Sora, OpenAI’s video generator.
- Google Gemini 1.5 has various versions, including Pro for complex tasks, Flash for everyday tasks, and Pro with Deep Research which compiles a Google search-based research document with references. The 1.5 models have internet access and use Google’s Imagen 3 model to generate images.
- Google Gemini 2.0 is the latest model from Google, and is designed to be faster and more capable at using external tools (other apps) for designing “AI Agents”, which I talk about at the end of the article.
- Google has various applications in their Lab, including the image generator playgrounds Whisk and ImageFX, VideoFX (using the Veo 2 video generator), and MusicFX. Some of these are only available in the US or via waitlist.
- Google’s NotebookLM has been released as a full product and uses the Gemini model to assist with note taking and research. In late 2024 the application went viral for its “podcast generator” feature.
- Microsoft Copilot is available via the website, and is built into the sidebar of the Edge browser by default. In early 2025, Microsoft announced it would increase the price of a Microsoft 365 subscription in several countries, including Australia, and add Copilot to the MS 365 applications (Word, PowerPoint, Excel, Outlook, etc.). Copilot uses OpenAI’s GPT and DALL-E models, and the Bing search engine.
- Copilot Studio is similar to OpenAI’s “GPTs” for creating custom chatbots, and this can also be achieved in Microsoft Azure.
- Claude features image recognition, app creation through “artifacts”, upload via files or Google Drive, and a “style” feature which allows you to select from an existing style or create one of your own.
- Claude Projects allow for the uploading of many files and the creation of a custom chatbot with instructions, similar to an OpenAI GPT.
- Meta (Facebook, Instagram, WhatsApp) has released the Meta AI chatbot across all of its products, including image generation. It is available in these various apps through direct messages, by typing @meta into search, or in the comments of certain posts.
There are of course many more developers, and many more features. If you’re overwhelmed, my advice is to pick one developer that you’re familiar with or which is used in your school, and focus on what it has to offer.

What Can GenAI Actually Do?
With all of those updates and advances in mind, the important question is: what can GenAI actually do? This is not about hype and hyperbole. I’m not going to tell you that chatbots will revolutionise education (they won’t) or that AI democratises creativity (it doesn’t). But if it has been a while since you tried ChatGPT and co., GenAI is probably more capable than you think, and in far more diverse areas.
How Are Educators Using GenAI?
One huge problem with the jagged adoption of AI in education has been the number of people – tech companies, government, CEOs of education “nonprofits” – telling us how educators should or could use AI, with far less attention paid to how educators actually are using GenAI in their day-to-day work. There is very little research, for example, into the experiences of K-12 teachers using GenAI for planning, teaching, and assessment. There are many reasons for this: research focuses much more on the tertiary level, it can be more difficult to conduct research in K-12 settings, and there are potentially more systemic barriers holding K-12 back from using AI. Of course, many educators are also resisting the use of AI for perfectly valid reasons, which I’ll discuss later.
But there are plenty of examples of K-12 educators using GenAI if you look outside of traditional academic research. You’ll find them on social media, in Facebook groups, on LinkedIn and Bluesky, and in other corners of the internet. Here are a few recent examples I’ve spotted beyond the obvious “generating lesson plan” type activities:
- Transcribing the “lecture” part of a lesson to quickly create resources
- Converting documents into slideshows
- Generating images as creative writing discussion prompts
- Creating simple apps for demonstrating scientific and mathematical concepts
- Setting up custom chatbots for self-assessment and feedback
- Creating Q&A chatbots based on curriculum documents and course outlines (see the sketch after this list)
- Generating audio soundscapes to accompany student-made videos
- Projects with students designing and building their own software
- Using applications like Google’s NotebookLM in AI-assisted research tasks
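To make one of these concrete, here is a minimal, hypothetical sketch of the “Q&A chatbot based on a course outline” idea using the OpenAI Python SDK. The model name, file name, and prompts are illustrative assumptions; the same effect can be achieved with no code at all through a custom GPT, a Claude Project, or Copilot Studio.

```python
# A minimal sketch of a Q&A chatbot grounded in a course outline.
# Assumes the OpenAI Python SDK is installed, an API key is set in the
# OPENAI_API_KEY environment variable, and a plain-text course outline
# exists at course_outline.txt. Illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("course_outline.txt") as f:
    outline = f.read()

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a helpful assistant for students. Answer questions "
                    "using only the course outline below. If the answer is not "
                    "in the outline, say you don't know.\n\n" + outline
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("When is the first assessment task due?"))
```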

Challenges and Barriers
Of course, with any new technology there are significant barriers to adoption. Given the ethical complexities of AI, it may also be a technology worth deliberately resisting, but I’ll talk more about that later.
The Expertise Problem
GenAI is, at first, incredibly easy to use. You put text in, and you get text out. But it becomes apparent very quickly that to use AI well you actually have to have a lot of expertise: both in the technology, and the subject area you’re using it in.
First of all, you need to be able to spot hallucinations and errors. These can range from fairly innocuous mistakes (ChatGPT hallucinates an incorrect date or makes up a URL) to slanderous errors (ChatGPT “accuses a radio broadcaster of committing fraud“). In education, you have to know the discipline well enough to spot errors, which creates a paradox for learners, who by definition don’t yet know what they haven’t learned.
There is also no instruction manual for AI. Even the developers are often unsure of what these systems can and can’t do, and surprised by new capabilities. This is called the “discoverability problem” of AI: it is difficult to discover features and functions, unlike traditional software which comes with a handy toolbar, file menu, or help desk.
I wrote much more about this issue with AI and “expertise” in this article:
The Technological Divide
Access to technology has always been skewed for various reasons: economic, geographic, and political. In my part of the world, regional Victoria, the extensive periods of COVID remote learning highlighted just how many of our students had limited internet access – and this is in a reasonably wealthy agricultural community. Schemes like One Laptop per Child have notoriously failed, and UNESCO has reported extensively on the issues with edtech.
Generative AI represents yet another challenge for equitable technology access. Although companies like OpenAI have released free versions of their products, they of course come with limits and restrictions that make them much less attractive than the pro versions. This creates an issue in education when some students have access to better versions of the technology through their parents’ accounts, or their own.
This divide became readily apparent as soon as OpenAI released the paid version of GPT-4 to subscribers at USD $20 per month. From that point on, it only got worse. Simon Willison, a software developer and technology writer, is worth quoting in full here:
For a few short months this year all three of the best available models—GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely available to most of the world.
OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely available from its launch in June. This was a momentous change, because for the previous year free users had mostly been restricted to GPT-3.5 level models, meaning new users got a very inaccurate mental model of what a capable LLM could actually do.
That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT Pro. This $200/month subscription service is the only way to access their most capable model, o1 Pro.
Since the trick behind the o1 series (and the future models it will undoubtedly inspire) is to expend more compute time to get better results, I don’t think those days of free access to the best available models are likely to return.
Things we learned about LLMs in 2024 – Simon Willison
Slop and Digital Plastic
“Slop” is the term currently used by many to describe AI generated content with little to no human input. It was popularised last year by articles like this one in the Guardian, and authors like Simon Willison. Slop even has its own meme mascot: Shrimp Jesus.
I’ve been writing about “Digital Plastic” for the past couple of years. It’s a metaphor I’ve used to describe how GenAI can be beneficial, but also very harmful to the online ecosystem. We can use it for accessibility, productivity, and entertainment, but if left unchecked it will likely overwhelm quality human-generated content online.
Resistance and Refusal
Towards the end of last year, the sentiment towards GenAI in academic communities took a noticeable shift towards criticism and even refusal. For my part, I wrote several articles about my position on the fence. But GenAI has become an increasingly contentious topic in schools and universities, and the questions I am asked during professional learning sessions have become more and more pointed.
Often, these questions are along the lines of: “How can you justify using AI when you know about the environmental costs?” or “Isn’t all of this technology built on stolen data?” Well. Fair enough. As I discussed above, these are very real concerns, and absolutely valid reasons for refusing the technology. On the other hand, as I also discussed above, global AI energy consumption is a fraction of the total caused by digital technologies and devices, and copyright infringement is nothing new.
I’ve explained my position in more detail in this article:
For those educators wishing to push back against AI and refuse its use in the classroom, I would only ask that you make sure your opinions are based on personal experience with the technology, and not secondhand information. Ignore shock-and-awe headlines about ChatGPT’s water consumption (often misquoted or inaccurate). Search for resources like these, written by and for educators who are taking a stand against the technology: Refusing GenAI in Writing Studies: A Quickstart Guide by Jennifer Sano-Franchini, Megan McIntyre & Maggie Fernandes, and Burn It Down: A License for AI Resistance by Melanie Dusseau.
If you’re an English teacher concerned about the use of ChatGPT in writing, but still willing to experiment with the technology, then check out my resources from the 2024 state English teachers’ conference:
Just last week, educator Marc Watkins wrote this excellent piece:

Responding to GenAI
Every school is at a different stage with Generative AI: some have been experimenting since 2022, while others have barely scratched the surface. Some schools have created custom chatbots and internal AI systems, whereas others have deliberately and consciously opted out of using GenAI for both staff and students. It is not possible to ban AI – it is already far too ubiquitous – but whether you choose to use or refuse AI, you need strong institutional policies that provide guidance for staff, students, and the community.
I worked with the Victorian ICT Network for Education (VINE) in 2023 to produce Australia’s first open access GenAI policy, designed as a template for schools to take and contextualise. It has since been used by hundreds of schools here and overseas, and is still freely available – a good starting point for any school that has not yet started the process. You’ll find the VINE guidelines here.
Alongside the VINE guidelines, I have also worked on the AI Assessment Scale. Now in its second version, the AIAS provides a framework for exploring AI use with students at different levels depending on the content being assessed. The AIAS began as a “traffic light” model which was widely adopted by K-12 and Universities globally, and has featured in both TEQSA advice in Australia, and UNESCO’s Digital Week in Paris. The current version of the AIAS and its many translations can be found here:
The (Near) Future of GenAI in Education
Last of all, it’s worth turning our attention to the near future of generative AI. I don’t have a crystal ball, but I can take a guess at the trajectory of some of these technologies. For example, the convergence of augmented reality, wearable technology, and AI is almost certain: Meta’s collaboration with Ray-Ban, which puts Meta AI into a camera and microphone on your face, has already taken off, with the smart glasses becoming Ray-Ban’s best-selling product.
AI Agents are still “more hype than reality“, but they are also a part of the near future narrative of AI – even if only as a marketing term from AI developers. So-called “agents” can interact with other software and act independently of the user, carrying out multi-step tasks autonomously.
I wrote about all of my predictions for the near future in these two articles at the end of last year:
That’s it for now! If you’ve made it this far, consider yourself about as up-to-speed as you need to be for 2025. Don’t expect that the arms race will slow down at all, but don’t get overwhelmed either. You will hear increasingly loud and divisive rhetoric from both the adopters and the refusers: make your own mind up, experiment, and learn.
This post and others on my site form the backbone of my online and face-to-face professional learning sessions and the on demand courses at practicalaistrategies.com.
To get in touch about any GenAI related professional learning and advisory services, please use the form below.
