Australian Framework for Generative AI in Schools: A good start, but much more to be done

The final version of the Australian Framework for Generative AI in Schools has been published, offering six core principles and twenty-five guiding statements for schools seeking to use generative AI. But will it help?

Since February 2023, Australian Education Ministers have been working on a framework for generative AI in consultation with unions, teachers, students, industry, academics, sector organisations, and school communities to determine a national approach to the use of generative AI in schools.

The Framework has undergone several rounds of consultation, including a period of public consultation convened in July-August by the New South Wales Department of Education. A Parliamentary Inquiry was also conducted by The House Standing Committee on Employment, Education and Training, culminating in a series of public hearings in October and November. I’ll write a longer post about these hearings once the transcripts are fully released.

I’ve written about the consultation draft in another post, and the final version hasn’t changed a great deal. I also used the draft Framework, along with the UNESCO Guidelines and my own research, to produce the VINE guidelines, which have been made open access.

What’s new in the final Framework?

The Framework addresses six “core principles” of generative AI in education: Teaching and Learning; Human and Social Wellbeing; Transparency; Fairness; Accountability; and Privacy, Security and Safety. These have remained consistent since early consultation drafts, though the guiding statements have changed in places, including some movement of statements between principles.

Most notably, the statements under Teaching and Learning have undergone some significant updates. There is now an acknowledgement of the importance of teacher expertise (1.3), stating that “teachers are recognised and respected as the subject matter experts within the classroom.” The draft guiding statement on human cognition has also been replaced by critical thinking (1.4), and learning design (1.5) has been added, stating that “work design for students…clearly outlines how generative AI tools should or should not be used”.

I’m particularly pleased to see the addition of teacher expertise, given the potential for these technologies to be used in ways which deskill teachers. Using generative AI to create stock lesson plans, resources, or units of work as a way to reduce teacher workload would in many ways counteract this guiding statement. Rather than generative AI being used to create lesson materials in bulk, I’d much prefer to see educators provided with the skills and knowledge to use the technologies in ways that celebrate and support their own expertise.

However, as we wrote in this recent article for The Conversation, there needs to be an acknowledgement that teachers’ skills extend far beyond subject expertise.

Too broad, too basic?

As a high-level document, the Framework inevitably contains guiding statements that are broad and open to interpretation. Unfortunately, some are so broad that they might be difficult to implement in schools. For example, core principle three, Transparency, suggests models should be explainable and that end users – schools, students, and communities in this case – should “broadly understand the methods used by generative AI tools”. This is unlikely, given the longstanding issue that even generative AI developers don’t always know exactly how these models work.

A recent “transparency index” published by Stanford University’s Center for Research on Foundation Models (CRFM) suggests that “no major model is close to providing adequate transparency,” with the most transparent model – Meta’s open source Llama 2 – scoring only 54 out of 100 on the index.

Even with talk of “sovereign AI” and custom models for use in Australian government organisations, and potentially in education, it will be difficult for schools to approach the core principle of transparency when they have little control over the foundation models their applications and services are built on top of. These technologies, according to the Framework, are to be monitored, risks are to be “managed”, and tools are to be tested before use. I would ask: by whom?

Beyond the transparency issues, generative AI – like all forms of predictive, classifying machine learning technology – is prone to bias and discrimination. This is addressed in the Framework, but there are assumptions that generative AI can be used in ways which are not harmful and which “expose users to diverse ideas and perspectives and avoid the reinforcement of biases.”

Image generation models, including DALL-E 3 from OpenAI, which is already accessible in schools through Microsoft Bing Chat, are notoriously difficult to “de-bias”. ChatGPT and other applications built on Large Language Models also present a homogenised, Westernised, white, male perspective due to the composition of their training datasets. It is arguably impossible to use these models in ways which do not reinforce the biases of the training data.

The Framework also recommends schools use generative AI in ways that respect cultural and intellectual property rights. Again, given the problematic construction of generative AI models, including copyright concerns over dataset contents and ongoing high-profile lawsuits, it’s unlikely that schools or teachers will have much control over the use of intellectual property. Indigenous data sovereignty is an equally contentious issue: the appropriation of First Nations languages and cultural artefacts in training data continues to raise ethical questions and requires the direct input and collaboration of Indigenous communities.

Finally, generative AI technologies are rife with safety and security issues. Just days prior to the publication of the Framework, a research team at Google DeepMind used data extraction attacks to expose personal information from ChatGPT training data. There is no way that teachers or schools can guarantee safety and privacy when using generative AI. The Framework might also benefit from more specific advice around the potential harms of deepfakes and the use of generative AI to create non-consensual explicit material, something which the Australian eSafety Commissioner has prioritised in their report into generative AI.

A step in the right direction

The Australian Framework for Generative Artificial Intelligence in Schools represents an important step in supporting schools dealing with the implications of generative AI technologies. The emphasis on Teaching and Learning, including the post-draft updates around teacher expertise and critical thinking, is exactly where this document should focus.

However, there are complex and highly problematic ethical issues with generative AI – and Artificial Intelligence more broadly – which have not been fully accounted for in the Framework. Schools cannot be expected to deal with systemic issues of bias and marginalisation, a lack of transparency, complexities of intellectual and cultural property, and the many security and safety issues of generative AI. These problems must be addressed at a higher level, with government putting pressure on developers to do better.

There will also need to be significant resourcing of schools to support teachers, students, and the broader community in adopting the core principles of the Framework. In the last 12 months, a lot of “AI experts” have sprung up, including in education. The messaging around generative AI in teacher networks and on social media is often unclear, and work will need to be done to cut through the noise. The organisations best placed to support teachers – subject associations, sector leaders, and universities – should be encouraged to lead the way: the development and delivery of these resources should not be left to technology companies and organisations with other vested interests.

I do find it interesting that, although the Framework document states the work is “evidence-informed”, there is no evidence that generative AI will have a positive impact on education. In fact, despite input from many academics, there is no reference to any research in the final document. I sincerely hope that the Framework hasn’t been stripped of research on the flawed assumption that secondary educators aren’t interested in, or don’t have the inclination to engage with, academic literature. If any of the Framework becomes policy, I’d expect to see much more rigour.

The Framework is set to be reviewed every 12 months to account for the rapid pace of development in these technologies. In the next twelve months, I hope to see serious thought put into how schools might tackle the bigger-picture ethical concerns of these technologies and prepare our students for the near future. It must be recognised that the challenges posed by these technologies extend beyond the classroom. Addressing bias, transparency, and security concerns requires a concerted effort at the systemic level. To benefit from generative AI in education, there must be a comprehensive approach involving government regulation, technological innovation, and continuous support for schools.

If you’d like to talk about generative AI, AI guidelines, or professional learning, please get in touch below:
