UNESCO Guidelines for Generative AI: Practical Applications for Schools

Following Digital Learning Week in Paris, the United Nations Educational, Scientific and Cultural Organization published a comprehensive document containing guidelines for Generative AI (GAI) in education and research, and exploring the implications for policy.

The guidelines are linked to the UN Sustainable Development Goals through Goal 4: ensure inclusive and equitable quality education and promote lifelong learning opportunities for all, and reflect the “Education 2030” agenda of the organisation. As you might expect, there is a focus throughout on human agency, inclusion, equity, and diversity. There is also useful advice on the practical and creative implications of GAI in education which can be applied to the K-12 context.

With the focus on policy and regulations, some have criticised UNESCO’s approach to GAI as too conservative, or even as “virtue signalling” or moralising. Personally, I see the guidelines as a useful addition to our approaches to Generative AI in education, but, like all frameworks and guidelines, they need to be contextualised and made as practical as possible if they are to be of any use in the classroom.

The Guidelines at a glance

UNESCO’s 48-page document covers both the challenges and the opportunities of GAI for education and research, as well as some broad definitions. The document is structured as follows:

  1. What is generative AI and how does it work?
  2. Controversies around GAI and their implications for education
  3. Regulating the use of GAI in education
  4. Towards a policy framework for the use of GAI in education and research
  5. Facilitating the creative use of GAI in education and research
  6. GAI and the future of education and research

In this post, I’m going to explore some of the key aspects of these areas and how they can be applied practically in a K-12 context, whether through school policy, guidelines, or practical application in the classroom and through teacher PD. In the conclusion, I’ll outline my key practical takeaways from the guidelines.

What is GAI?

The guidelines offer a fairly comprehensive overview of Generative AI, focusing on text and image generation, but also touching on video and music. There are some technical details, and lists of popular (as of September 2023) applications and services such as OpenAI’s ChatGPT, Google’s Bard, Perplexity, DALL-E, Craiyon, and Midjourney.

For teachers looking to cut through the “thousands of AI apps” noise, the apps listed in the guidelines provide a good starting point without going too far. As I’ve discussed in other posts, I recommend getting to grips with just one or two GAI applications, such as ChatGPT and DALL-E, before diving into others. In many cases, current GAI applications are just interfaces built on top of foundation models like GPT, and therefore do not require different skills.

There is some guidance on prompting, which may be useful to anyone who hasn’t yet had the time to explore these technologies. In my opinion, the best way to learn how to prompt GAI is to get in there and try it out – you’ll soon see for yourself the strengths and limitations of different approaches.

I think the most important aspect of this section for K-12 educators is the acknowledgement that GAI spans different modes: text, visual, audio, and video. I’ve written previously about state policies which treat ChatGPT as the only form of GAI, and in my experience many schools are yet to explore beyond text generation. It is important to recognise that students will be able to use the full multimodal suite of Generative AI tools and that, although some modes like video are currently immature, the pace of development will continue to accelerate.


Controversies surrounding GAI

I’ve written a lot about the ethical concerns of artificial intelligence as a technology and an industry, including the inherent bias in large language models, the environmental impact of the technologies, the implications for academic integrity, copyright, and the threats to privacy and security.

Part two of the UNESCO guidelines focuses on some of these key areas. The major “controversies” in the guidelines include the generation of deepfakes, the “pollution” of the internet with masses of GAI content, and the potential to worsen the global digital divide.

In practical terms, we can educate our students about these concerns and advocate for more transparency and accountability as these systems enter our schools. The ethical concerns with GAI extend beyond the reach of most individual educators, and can seem too daunting to approach. However, most jurisdictions already have areas of the curriculum designed to approach ethical discussions, and these can be updated to include conversations with students about GAI.

Schools also have the autonomy to say no to technologies which lack transparency, or which have an over-reliance on student personal data. Over the next few years, we will certainly see an increase in GAI-powered edtech which relies on the datafication of students in the interest of personalised learning. Schools should be cautious about which platforms and services they sign up to, and should of course ensure informed consent from students, parents, and caregivers.

Regulating GAI in education

Like the bigger-picture ethical concerns around GAI, regulation of the industry is largely out of the hands of individual teachers and schools. The guidelines provide suggestions for government regulation of GAI services and providers, as well as advice for institutions and individuals as users of the technology.

From a practical perspective, the biggest implications of regulation in the classroom are ensuring that both you and your students understand your rights and responsibilities.

Teachers can, for example, help students to navigate the (often deliberately) complex terms and conditions of GAI applications and services to understand how their data is collected, stored, used, and shared. Honestly, no one reads terms and conditions – and that’s exactly how we got ourselves into such a mess with social media. But this doesn’t have to be a boring exercise in reading small print. Group annotation tools, threads pointing out the absurdity of some T&Cs, and getting students to write their own “ethical” T&Cs can lighten the discussion.

In Australia, the eSafety Commissioner has recently released a position statement on GAI which includes suggestions to industry for a “safety by design” approach. It might not seem like the flashiest use of GAI in the classroom, but it is important to discuss with students. The eSafety Commissioner guidance, for example, offers advice on what to do if you are a victim of GAI-related cyberbullying or abuse, such as explicit deepfakes.

Regulation has been slow to catch up with the developments in GAI in the past 18 months, but it is catching up. Educators will of course need to make sure that any use of the technology in the classroom is in line with local laws and regulations.

GAI policy in education

The approach to policy in the UNESCO guidelines matches what we’ve seen here in Australia through the draft national framework for GAI in schools. The policy advice focuses on: inclusion, equity, and diversity; human agency; monitoring and evaluation; the development of AI competencies for learners; capacity building for teachers; plurality of expressions and perspectives; local models; and long term implications.

Again, some of this is beyond the scope of individual schools and teachers. However, the “AI competencies” will be worth keeping an eye on, and UNESCO plans to release its own draft competencies in 2024.

Schools should also be considering how they will provision teacher professional development. Teachers will need support in using the tools, understanding how students might use the tools, academic integrity and assessment practices, and GAI ethics and appropriate use. Plenty of this PD is already out there; it is a case of curating and finding appropriate resources. In some cases, curriculum bodies like ACARA will provide explicit curriculum links for discussing GAI in the classroom.

Personally, one of the reasons I think schools benefit from a clear GAI policy or guidelines is that they make transparent the expectations for staff PD: if a school commits to implementing, monitoring, and evaluating GAI, then it must also commit to ensuring sufficient training for teachers and staff.


Creative use of GAI in education

As I mentioned earlier, there have been some criticisms of the UNESCO guidelines due to the focus on policy, regulation, and the ethical concerns of GAI. Section 5 of the guidelines, however, provides some examples of “potential but unproven uses” of the technology.

We’re already seeing a lot of hype over the “transformational potential” of GAI, or the ways in which it will “revolutionise education”. Given edtech’s frequent past disappointments, and the ever-growing digital divide in Australia and globally, I prefer UNESCO’s slightly more conservative approach.

There are, undoubtedly, creative applications of these technologies in education. I personally use text, image, and audio generation as part of my consulting business and my writing, and I’m sure that I’ll use video generation when the technology improves. I’ve also used GAI features like ChatGPT’s “Advanced Data Analysis” (formerly Code Interpreter) to create programs in Python and HTML that I would struggle to make myself.

The UNESCO guidelines suggest “human centred and pedagogically appropriate interactions” as an approach to using GAI: essentially, make sure that GAI doesn’t replace human critical and creative thinking, and don’t use it just for the sake of it. I think that’s sound advice. We don’t need to see any more uncanny-valley “historical talking heads” with AI voiceovers. Instead, we should be looking for occasions where GAI supports and augments useful teaching strategies.

In terms of personalised learning, schools should be cautious of GAI. We know that these systems present a normalised, culturally US-centric, and non-diverse worldview. Technologies offering personalised learning may inadvertently further marginalise some students. As an example, I’m yet to find any large language model which can accurately represent an autistic worldview beyond outdated and stereotypical paradigms which centre on the pathologising of autistic traits, such as “fixing” eye contact and other social skills. Contemporary research based on lived experience needs to be worked into any GAI personalisation services before they can be used effectively.

It’s worth digging into pages 30-35 of the guidelines for some examples of potential uses, as well as some of the caveats and risks associated with them. These include:

  • GAI as a research advisor
  • GAI literature reviewer
  • Curriculum co-designer
  • Chatbot teaching assistant
  • 1:1 GAI coach or tutor
  • “Socratic challenger”
  • Project-based learning advisor
  • Learning difficulty diagnostic/accessibility tool

GAI and the future of education and research

The guidelines end with a few more notes of caution, some suggestions for assessment practices, and the implications for critical and creative thinking.

The main practical takeaway from this section is the need to update assessment practices. Generative AI content cannot be reliably detected. While some companies, such as Google, are working on watermarks for generated content, not all developers will include such safeguards, and the technologies are not yet commercially available. Relying on detection and trying to “catch” students also creates two particularly problematic issues.

First, detection creates an equity issue. Some students will be more digitally literate than others. Some will have better access to technology at home. Some will have access to better GAI models such as paid subscriptions. They might use GAI in ways that are undetectable. Others might use the technology in ways that are much more obvious, or use free versions that are less capable. The first student will “cheat” and get away with it. The second will get caught and punished.

Detection also creates a teacher workload issue. The need to “catch” and punish students for using GAI is time-intensive, requires manual identification of possible GAI content, and is prone to inaccuracies that may lead to false accusations. Teachers would have to spend a lot of time in conversation with students and parents, and some of those conversations carry a significant “my word against yours” problem.

Rather than relying on detection tools or archaic pen-and-paper examinations, schools and teachers should explore ways to incorporate GAI appropriately, to discuss academic integrity with students, and to create engaging tasks where students are less inclined to want to “cheat” in the first place.


Practical takeaways

The UNESCO guidelines are not intended to be a hard-and-fast set of rules for educators to follow. While there are some concrete recommendations for policy-makers and teachers, the usefulness of the document comes down to how it is practically applied. To close this blog post, these are my main suggestions for what schools could do right now to draw on the guidelines and put them into action:

  1. Educate teachers and staff: Organise workshops and training sessions to familiarise teachers and school staff with GAI, its benefits, and ethical considerations.
  2. Review existing curriculum: Identify areas within the existing curriculum where discussions about the ethical implications of GAI can be integrated.
  3. Student awareness: Run assemblies or classes to educate students on what GAI is, how it works, and the ethical considerations involved.
  4. Policy review and development: Schools should review their existing policies and identify areas which may need updating. If needed, develop a new GAI policy.
  5. Informed consent: Send out communications to parents and caregivers to inform them about the use of GAI in education and obtain their consent.
  6. Regular monitoring and feedback: Establish mechanisms to regularly monitor the use of GAI in education and gather feedback from everyone in the community.
  7. Community engagement: Engage with parents, caregivers, and the wider community to get their perspectives on GAI. If exploring GAI for personalisation, engage people with diverse perspectives.
  8. Academic integrity and assessment: Develop guidelines and practices that discourage the unethical use of GAI in academic settings, such as generating essays or completing assignments. Revise assessment approaches.

