If 2023 was the year Generative AI dropped on us from a great height, then 2024 looks like the year for the tech to grow out of its “move fast and break things” phase. In education, it’s time to have some serious conversations about where we go from here.
In 2025, I wrote a similar article to this one; be sure to check it out here:
As I trawled through blog posts while drafting Practical AI Strategies, I lost count of the number of times I had used variations on the theme of “when OpenAI launched ChatGPT in November 2022…”. I think the point I was trying to make – again, and again, and again – was that the release of OpenAI’s chatbot marked a paradigm shift in the way we understand and interact with GenAI. In education, that meant coming to terms with shifting concepts of academic honesty and the creation or demonstration of knowledge.
The technology has already reached a stage where it is ubiquitous. It is impossible to browse online or use social media without encountering AI-generated content, synthetic media, and the tell-tale signs of text written by ChatGPT (anything with the word “delve” or the phrase “ever-evolving landscape” is a major red flag). Not content with overrunning the digital world with GenAI content, Microsoft has just announced a physical button to conjure up Copilot – the next iteration of its GPT-based Bing Chat. Despite the obvious marketing ploy, the sidestep from the digital into the physical further confirms that this technology isn’t going anywhere.
Late last year I was working with schools and universities who had barely scratched the surface of GenAI. It’s still true that most organisations and individuals haven’t had the time or the inclination to keep up with developments in the last twelve months. But with the increased accessibility of the technologies and the release of guidance such as the Australian Framework for Generative AI in Schools, now is the time to start planning and setting a few things in motion for 2024.
Where to begin?
If your school or university was one of the many waiting for advice from a “higher authority” before acting on GenAI, well, now you have it. In K-12, the Framework provides sparse advice but at least a clear message: generative AI is here, and we need to do something about it. In tertiary, most universities have accepted that they cannot police AI. The Tertiary Education Quality and Standards Agency (TEQSA) have provided resources to understand the benefits and challenges of the technology.
I’d suggest that you kick off the academic year with a discussion at leadership level of the key aspects of these documents, and in particular Teaching and Learning, Assessment, Generative AI Ethics, and Privacy. Familiarise yourselves with the Department of Education and TEQSA resources, and then look to other state or national resources. For K-12 educators, many of the state senior certificate bodies have also produced or curated resources. QCAA in Queensland and BSSS in ACT in particular have created a large bank of materials on teaching & learning and assessment.
Which tools are out there?
Next, you’ll need to undertake a quick scan of the applications and services that are available for both staff and students. I say a quick scan, since you can waste a tremendous amount of time looking into the thousands of apps marketed as AI tools. Stick to a handful of sources, preferably within your school’s existing ecosystem. While this isn’t intended to reinforce the dominance of the major developers, like Microsoft and Google, it makes sense to encourage staff and students to use technology from providers they are familiar with, and these larger companies have a much greater (commercial) imperative to get safety and privacy right.
I would still recommend sticking to text and image generation, and leaving audio, video, and code generation to specialist educators and interested individuals. I think that fully multimodal GenAI is going to explode in 2024, and that we’ll see a huge increase in the connections between AI and technologies like virtual and augmented reality. But we don’t need to rush in, and we shouldn’t expect teachers to suddenly be experts in every burgeoning field of GenAI.
Stick to some of the following:
- ChatGPT (free) for text generation. The free version uses GPT-3.5, which is significantly less powerful than the paid version (GPT-4). Most of the time, it gets the job done if you know how to use it. (For the technically inclined, there’s a short code sketch after this list showing how the two model tiers look when accessed programmatically.)
- Microsoft Copilot for text and image generation. Copilot replaces Bing Chat and Bing Image Creator, and will ultimately be integrated into every Windows device and most Microsoft applications (such as Office). It uses GPT-3.5 by default but can use GPT-4, along with GPT-4V, OpenAI’s image recognition (vision) model, and DALL-E 3 for image generation. Copilot also uses Microsoft Bing for internet search.
- ChatGPT Plus (paid) for advanced and frequent users. I use ChatGPT Plus, though with Microsoft’s constant updates to Copilot and its inclusion of GPT-4, I often question that USD $20 per month. The GPT-4 model behind ChatGPT Plus currently outperforms all other models on most benchmarks. I use it to write code, edit my website, and create images using the same DALL-E 3 model.
- Perplexity for search and research. Perplexity is an interesting one to watch. It has just received a large round of investment backed by the likes of Jeff Bezos and Nvidia, and already outperforms ChatGPT, Copilot, and Bard in providing accurate responses to search requests. Basically, Perplexity “hallucinates”, or fabricates information, less than other models. I would recommend this for educators and for more senior students in upper secondary and tertiary.
- Adobe Firefly (and Photoshop) for image generation. Adobe has built an arguably more ethical image generation model with Firefly. Whether or not you agree with using Adobe Stock images to build a model – and many artists don’t – it’s very attractive in an education context: Adobe offers K-12 licensing both for Firefly as a standalone product and for the generative AI tools incorporated into popular products like Photoshop, such as Generative Fill and Generative Expand. Basically, if your school or organisation has an Adobe CC license, then your students and staff already have access.
- Google Bard. I include this even though I never use it, simply because Google is obviously a major player. At the time of writing, Bard is generally hopeless in comparison to other models. But it would be ridiculous to assume that this will always be the case. Google’s Gemini model had some teething issues on first release due to hyped-up marketing, but it’s likely that a much improved chatbot will be available soon with just as many features as Copilot.
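For the specialist educators and interested individuals mentioned earlier who want to go beyond the chat interfaces, here is a minimal sketch of what the GPT-3.5 / GPT-4 distinction looks like programmatically. It assumes you have an OpenAI developer account with an API key set in the OPENAI_API_KEY environment variable and the official openai Python package installed; the prompt is just an illustrative placeholder, and API usage is billed separately from a ChatGPT Plus subscription.

```python
# Minimal sketch: sending the same prompt to GPT-3.5 and GPT-4 via the OpenAI API.
# Assumes the `openai` Python package (v1.x) is installed and the
# OPENAI_API_KEY environment variable is set. API usage is billed
# separately from a ChatGPT Plus subscription.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Explain photosynthesis to a Year 7 class in three sentences."

for model in ["gpt-3.5-turbo", "gpt-4"]:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful teaching assistant."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.7,
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

Running the same prompt through both models side by side is a quick, concrete way to see why the paid tier is considered significantly more capable, and why the free tier still gets the job done for simpler tasks.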
If you’re in a particular ecosystem (e.g., you’re a “Microsoft school”), it’s likely your ICT admin staff can turn on features like Copilot across all staff devices. This will enable Copilot to be used from the Windows taskbar. As I said earlier – these technologies are almost ubiquitous already.

What are the major ethical concerns?
In 2024, I expect we will see an increase in public awareness of the ethical issues of generative AI. We have already experienced “techlash” against social media and data collection in recent years, and the pushback against generative AI will be much swifter and more aggressive. This won’t be a bad thing: it took over a decade before we realised the true extent of the privacy breaches and misuse of personal data that are the hallmark of digital technologies.
In education, we tread a fine line between using technology appropriately and being so high up on the ethical high horse that we can’t see the ground. It is not up to us to be the arbiters of how students use technologies: once they leave the classroom or lecture hall, they’ll do what they want anyway. But it is our responsibility to educate students about how these products work, and that includes the major ethical issues.
I have written previously about the major ethical concerns of generative AI, and devoted a section of my upcoming book to issues like bias, human labour, and data surveillance. But as we kick off 2024, there are some issues particularly prevalent in the media which would make for good discussions with students.
First of all, there is increasing pushback against GenAI developers over copyright and intellectual property concerns. The New York Times has sued OpenAI and Microsoft for copyright infringement, alleging the misuse of articles in training datasets. The lawsuit argues that the developers are liable for “billions of dollars in statutory and actual damages”. In response, Tom Rubin of OpenAI told the Washington Post that the NYT had violated the terms and conditions with its prompting method – a typical “blame the user” response.
Over in the world of image generation, Midjourney released version 6 of their product. The increase in quality is notable, with highly detailed photorealistic images created from very minimal prompting.

Unfortunately, the increase in quality doesn’t offset one of Midjourney’s most troubling features: the ability to create images that are practically identical to images from the dataset. Technically this is called overfitting and shouldn’t happen, but it is entirely possible to recreate copyrighted images using the platform. Midjourney’s response? Change the terms and conditions, ban the users blowing the whistle, and threaten legal action. Gary Marcus explored the story in this article.
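For readers unfamiliar with the term, overfitting is a general machine learning problem: a model effectively memorises its training data instead of learning patterns that generalise. The toy sketch below, which uses scikit-learn and has nothing to do with Midjourney’s actual (non-public) models, shows the classic symptom: near-perfect accuracy on the data the model was trained on, and noticeably worse accuracy on data it hasn’t seen. In an image generator, the equivalent symptom is reproducing training images almost exactly.

```python
# Toy illustration of overfitting (unrelated to Midjourney): an unconstrained
# decision tree "memorises" noisy training data, scoring far better on the
# examples it has seen than on held-out examples. Requires scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small, deliberately noisy classification problem.
X, y = make_classification(
    n_samples=300, n_features=20, n_informative=5, flip_y=0.2, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# With no depth limit, the tree keeps growing until it fits every training point.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

print(f"Training accuracy: {model.score(X_train, y_train):.2f}")  # close to 1.00
print(f"Test accuracy:     {model.score(X_test, y_test):.2f}")    # noticeably lower
```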
As if that wasn’t bad enough, Jon Lam – senior storyboard artist at Riot Games – is sharing a list of over 16,000 artists who have been deliberately labelled in the dataset, alongside a Discord conversation in which Midjourney developers detail the process. The list has been added to the lawsuit against Stable Diffusion, Midjourney, and others.
2024 will possibly be a “coming-of-age” year for Generative AI technologies. Public and legal pressure might result in changes to legislation and regulation that ultimately impact education and the technologies we can use in the classroom.
How do we update our policies and guidelines?
After looking at the Framework and other education resources, exploring accessible apps and services, and considering the ethical implications, you will need to sit down and look at your policies and guidelines. I work with many schools and organisations to audit existing policies, and most places have cybersafety policies, digital user agreements, and the like which can be updated to accommodate these technologies.
But there are areas which will need more specific attention. Issues with GenAI deepfakes will require serious consideration and updates to cyberbullying and digital consent materials. We have already had reports in Australia and worldwide of GenAI being used to create explicit and non-consensual images. It is important to know how to handle these issues, where to find support materials for staff and students, and how to report the abuse to the relevant authorities.
The eSafety Commissioner provides clear advice, and these processes should be incorporated into existing school policies where necessary.
Use the Framework for guidance in K-12, and at a tertiary level consider the implications of the technology beyond just assessment and academic integrity. I wrote a lot about policy and guidelines in 2023 and have collected the articles here. There’s also a section in Practical AI Strategies on creating and updating school policy.

Where can we find resources and training?
I spent a lot of 2023 writing about the different areas of Generative AI, including policy, ethics, practical advice, and more in-depth articles on how Generative AI works. I created various collections of articles to make them easier to navigate, and I’ll continue to update the collections as I write more articles in 2024:
There are many resources available to help support educators and students using Generative AI. The Australian Curriculum has been updated to include a Curriculum Connection for Artificial Intelligence, and as I mentioned earlier the various state certification bodies including QCAA, SACE, NSW DET, and BSSS have produced resources.
How can we handle assessments?
Once you have discussed the basics of AI apps, ethics, and your organisational policies and guidelines, it will be necessary to provide staff and students with clear advice about assessment.
My advice has been the same since ChatGPT was released: we cannot police, block, ban, or detect GenAI materials, so we need to change our assessment practices accordingly.
The AI Assessment Scale (AIAS) was developed to provide a shared way to talk about AI in assessment, outlining where AI can and can’t be used in different tasks and contexts. Rather than a black-or-white approach to using GenAI, it allows educators and students some discretion over the technology. At the end of last year, Dr Mike Perkins, Dr Jasper Roe, Dr Jason MacVaugh and I updated the earlier version of the AIAS to make it flexible enough to be used across disciplines and from K-12 through to tertiary. You can read about the AIAS through the open access preprint on arXiv or check out the blog post:
Conclusion
Whatever stage you are at in your planning and implementation of Generative AI, I would recommend spending some time at the start of 2024 laying down a few guidelines for your staff, students, and the community. Understanding the ethical and practical implications of the technology will be incredibly important in the coming months. In a couple of weeks, students will enter classrooms, lecture halls, and online spaces with easy access to hundreds of Generative AI tools. The technology – ethically problematic as it is – can’t be avoided. There is also huge creative potential, which I’ll be writing more about this year as I explore multimodal GenAI. Ultimately, 2023 might have caught us off guard, but in 2024 we need to be much more proactive.
If you’d like to discuss consulting services, advisory work, or professional development for Generative AI, please get in touch using the form below:
