It’s been over two years since Australian Education Ministers agreed that generative AI was a national education priority and commissioned the Australian Framework for Generative AI in Schools. That framework – a 7-page document published in 2023 – was Australia’s first attempt to provide “nationally consistent guidance” on AI use in education. It established six core principles, offered 25 guiding statements, and attempted to create a shared language for schools grappling with ChatGPT and its rapidly evolving successors.
Now, in late 2025, we have its HE counterpart: the Australian Framework for Artificial Intelligence in Higher Education. Published by the Australian Centre for Student Equity and Success (ACSES) in collaboration with the National AI in Schools Taskforce, this new framework is more ambitious than its predecessor.
For those of us working in K-12 education, this raises some pressing questions. What have we learned about AI in education since 2023? What can we learn from the differences between the two documents? And what does the emergence of this new framework tell us about where AI policy in education is actually headed?
Understanding the Core Principles
The schools framework established six principles: Teaching and Learning, Human and Social Wellbeing, Transparency, Fairness, Accountability, and Privacy, Security and Safety. These principles were designed to be aspirational, guiding schools toward “safe, ethical and responsible use” without mandating specific implementations.
The higher education framework retains some of this structure but expands it to seven principles with a notably different emphasis:
- Human-centred education – recognising that “the primary value of higher education lies in human connection, critical dialogue, and the development of wisdom and expert professional judgement”
- Inclusive implementation – ensuring AI benefits all students, with explicit attention to equity-bearing groups and intersectionality
- Ethical decision-making through fairness, accountability, transparency, and contestability – what the schools framework split across multiple principles is here consolidated under the FATE framework
- Indigenous knowledges – a standalone principle (not just a guiding statement) addressing data sovereignty and two-way learning
- Ethical development and deployment – covering procurement, environmental sustainability, and the entire lifecycle of AI tools
- Fostering adaptive skills for AI integration – prioritising students’ ability to monitor and adapt their own learning over technical proficiency
- Evidence-informed innovation – balancing innovation with research rigour and evaluation
What’s immediately striking is the theoretical density. Where the schools framework offered brief explanations, the HE framework cites specific scholars (Bearman, Birhane, Crenshaw, Fraser, Kukutai, Yunkaporta), references established research traditions (critical intersectionality, Indigenous data sovereignty, design justice), and grounds its recommendations in pedagogy and equity scholarship. It’s academically rigorous and much more evidence-based: something we called for when the K-12 Framework was first released.
Equity is central
The schools framework mentions equity in several guiding statements (4.1, 4.2, 4.3) but doesn’t foreground it as a core organising principle. The HE framework has equity woven throughout. It explicitly names equity-bearing groups: people from socio-economically disadvantaged backgrounds, Aboriginal and Torres Strait Islander people, women in non-traditional areas of study, people from non-English speaking backgrounds, people with disabilities, and people from rural and isolated areas.
More than that, it adopts Nancy Fraser’s three-dimensional model of social justice – redistribution, recognition, and representation – and uses this as a lens for AI implementation. The message is much clearer: AI in higher education must actively address existing inequities, not exacerbate them.

Indigenous Knowledges as a standalone principle
In the schools framework, Indigenous considerations appear in Principle 4 (Fairness), specifically in guiding statement 4.4: “generative AI tools are used in ways that respect the cultural rights of various cultural groups, including Indigenous Cultural and Intellectual Property (ICIP) rights.”
The HE framework elevates this to Principle 4 – Indigenous knowledges – and dedicates substantial space to explaining why this matters. It references the CARE Principles for Indigenous Data Governance (Collective Benefit, Authority to Control, Responsibility, Ethics), discusses Tyson Yunkaporta’s work on two-way learning, and acknowledges the “inherent tension between Indigenous epistemologies and ways of knowing and AI systems that have been developed and trained within a Western context.”
This recognises that AI isn’t culturally neutral, that the training data and design assumptions baked into these systems reflect particular worldviews, and that Indigenous communities have the right to maintain control over their knowledge and how it’s represented in AI systems.
Assessment and academic integrity
Interestingly, the HE framework explicitly states that it “does not substantially address academic integrity and the need for assessment reform, instead directing readers to Assessment Reform for the Age of Artificial Intelligence (Lodge et al., 2023a) and Enacting Assessment Reform in a Time of Artificial Intelligence (Lodge et al., 2025), published by TEQSA.”
Academic integrity is a massive issue in higher education – arguably even more so than in schools, where supervision and relationships provide some protection against wholesale AI misuse. By directing readers to the assessment reform guidance, the HE framework avoids duplicating work and acknowledges that assessment is a complex, evolving challenge that requires dedicated resources. It also signals a shift in attitudes since 2023: many people (myself included) feel that we have talked enough about academic integrity.

Adaptive skills > technical proficiency
Both frameworks recognise the need for AI capability, but they frame it differently. The schools framework talks about “instruction” (1.2) and ensuring students learn “about generative AI tools and how they work, including their potential limitations and biases.”
The HE framework goes further with Principle 6: Fostering adaptive skills for AI integration. The focus is on developing students’ ability to “monitor, adapt, and take responsibility for their own learning processes” – what the framework calls “adaptive learning skills.” The framework explains the rationale:
“The rapid evolution of AI tools means that developing adaptive learning strategies will serve students better than a narrow focus on current technologies or skills such as ‘prompt engineering’ that have limited future utility.”
This critique of “AI literacy” programs that emphasise technical skills like prompt engineering is particularly relevant for schools. We’ve seen a proliferation of professional development sessions teaching teachers how to write better prompts, but the HE framework argues this is short-sighted. What students need are “foundational capacities for self-directed learning and critical thinking” – capabilities that transfer across technological contexts.
Implementation guidance
Perhaps the most significant difference is the inclusion of a dedicated Implementation Guidance section in the HE framework. This section (pages 10-13) provides concrete advice on:
- Governance structures (who should be involved, how to maintain dialogue with national bodies)
- Policy development (what to address, how to reference equity and Indigenous knowledge principles)
- Procurement and development (embedding “equity by design,” involving diverse groups in co-design)
- Professional learning (building capacity, addressing discipline-specific considerations)
- Pedagogical integration (alignment with graduate attributes, transparency in AI use)
- Research applications (maintaining integrity, respecting data sovereignty)
- Evaluation frameworks (what to measure, how to identify unintended consequences)
- Cross-institutional collaboration (shared resources, joint procurement, communities of practice)
The schools framework has none of this. It offers principles and guiding statements but no roadmap for implementation, and it has been criticised for that lack of practical support. This is where the absence of resourcing becomes most obvious: state-by-state variance in the delivery of the K-12 framework has been a huge source of frustration for school leaders.
Why wasn’t the schools framework properly resourced?
The schools framework was released in 2023 with no dedicated funding (though it was related to the National Teacher Workforce Action Plan and workload reduction), no implementation support, and no enforcement mechanism. It was aspirational but toothless – a document that asked schools to do important work without providing the resources to do it.
I’ve written about this before. The framework itself is solid, and came at a time when most countries were sitting on their hands. It provided permission for schools and sectors to experiment, and some guidance on how to do that appropriately. But it was published into a void. Schools were expected to develop AI policies, provide professional learning, update assessment practices, and navigate complex ethical questions without time, funding, or expert support. Many schools have done excellent work despite this, but it’s been ad hoc and uneven.
The HE framework, by contrast, has regulatory support as well as the benefit of an additional two and a half years’ evidence. TEQSA is a regulator with the power to enforce standards, conduct reviews, and hold institutions accountable. TEQSA has already published two substantial assessment reform guides that the HE framework explicitly references. Universities know that if they don’t align with the framework, there will be consequences during their next review.
Schools don’t have this. The schools framework is endorsed by Education Ministers, but it’s not enforceable. There’s no equivalent to TEQSA conducting audits, no accountability mechanism, no central or even state body ensuring that schools are actually following the guidance.
This is important because AI in schools is at least as relevant an issue as AI in universities. School students are younger, more vulnerable, and less equipped to critically evaluate AI outputs. The potential harms – to wellbeing, to academic development, to equity – are arguably greater in K-12 than in higher education. Yet universities get a framework backed by a regulator, and schools get an aspirational document with no teeth.
Where next for both frameworks?
The emergence of the HE framework comes at an interesting moment. The National AI Plan, released in December 2025, positions education as a national priority and explicitly references the schools framework. We’re also seeing state-level developments like NSW’s NSWEduChat and South Australia’s EdChat – government-built AI tools that may eventually replace (or prohibit) commercial platforms in schools.
This raises questions about governance and coordination:
Will the frameworks converge?
There’s a strong case for convergence. Students don’t experience education as separate “school” and “university” phases – it’s a continuum. Foundational capabilities developed in Year 10 should connect to adaptive skills emphasised in first-year university courses. Academic integrity expectations shouldn’t shift dramatically at the school-university transition.
At the same time, the differences between school and university contexts are real. Universities deal with research integrity, postgraduate supervision, and academic freedom in ways that don’t apply to schools. Schools deal with mandatory attendance, duty of care obligations, and parental involvement in ways that don’t apply to universities.
The solution might be a tiered framework: shared principles at the top level (human-centred, ethical, equitable), with implementation guidance tailored to specific contexts.
Who will lead?
The schools framework was developed by the National AI in Schools Taskforce, with representatives from jurisdictions, sectors, and national agencies. The HE framework was developed by ACSES in collaboration with the Taskforce. Moving forward, who owns this work?
The National AI Plan suggests that “the Department of Education and the Department of Employment and Workplace Relations will also explore ways to equip learners with the skills and credentials to participate in an AI-driven workforce.” This is federal-level coordination, which is promising. But education is primarily a state and territory responsibility in Australia, and coordination across jurisdictions has historically been… challenging.
We need clearer governance structures. Schools don’t have an equivalent federal regulator to TEQSA (and probably shouldn’t – the political debates around a national curriculum have shown how fraught centralisation can be). But we do need some mechanism for ensuring consistency, sharing resources, and holding systems accountable.
In a future article I’ll write about what the new Higher Education Framework means for K-12, and how it should help to inform school policies and AI Guidelines in 2026.