What Curriculum Leaders Need to Know About AI in 2026


Most of the AI professional development I see in schools is aimed at everyone. Whole-staff sessions covering the basics: how to use GenAI, how to write a prompt, some tools you might find useful…

At the other end of the scale, you see policy sessions aimed at ICT and Business Managers, executive teams, and boards. And that’s fine. I run plenty of those sessions myself. But they’re rarely sufficient for the people who actually have to make decisions about curriculum and assessment.

If you’re a Head of Curriculum, a Director of Teaching and Learning, a Deputy Principal Academic, or an Assessment Coordinator, you’ve probably sat through several of those sessions by now. You know what Copilot is from the all-staff intro to AI. You’ve had about as much “prompt engineering” PD as you can handle. You’ve sat in the exec team meeting where a draft policy gets handed around and questions get asked about procurement and privacy.

But we know that schools lead from the middle, and when it comes to AI use, most of that middle space is held by curriculum leaders. At the end of the day, you’re the ones left enforcing the policy: calling parents after an “AI detection tool” problem has been escalated from classroom teacher to faculty leader, or trying to convince your staff that they really shouldn’t use Claude even though you know it’s better than the “approved” Copilot.

Curriculum leaders, basically, cop the fallout from both pedagogical and policy decisions regarding AI in schools. I was a Head of English for five years, and then a Director of Teaching and Learning. Almost my entire education career has been in curriculum leadership, and so that’s normally the perspective I bring to AI. Here are the areas I think curriculum leaders need to be across right now.

Assessments are more vulnerable than we think

I’ve run the “attack your assessments” exercise with dozens of schools at this point. The process is simple: take your major summative tasks, hand them to someone confident with AI tools, and ask them to complete as much as possible, as quickly as possible, from the perspective of a student deliberately misusing the technology.

The results are consistently sobering. Tasks that teachers believe are “AI-proof” because they involve personal reflection, local context, or discipline-specific knowledge often fall apart within minutes. And the technology has only improved since I started doing this in 2024.

This process is about confronting the reality of the technology so you can make informed decisions. If you haven’t done an honest audit of your major assessments, that’s the first thing I’d suggest. Not to throw everything out, but to know where you stand.

I wrote about this process in detail in my GenAI strategy series for faculty leaders, and it remains one of the most practical starting points for any curriculum team.

Assessment reform is a curriculum leadership problem, not a teacher problem

Individual teachers can tweak their own tasks, but the systemic questions sit with you. Should the faculty adopt a structured approach like the AI Assessment Scale? How do you ensure consistency across year levels and subjects? What do you communicate to students and parents about how AI can and can’t be used?

These decisions require someone with oversight of the whole program, not just one classroom. And they require navigating the tension between teachers who want to ban AI entirely and those who want to use it for everything.

In our recent JALT commentary on the AI Assessment Scale, Mike Perkins, Jasper Roe and I outlined seven suggestions for effective implementation. The first is to audit the broader validity of your assessments before even thinking about AI levels. The second is to decide the appropriate level per task, then redesign the brief, evidence trail, and rubric to fit that choice.

This is curriculum design work, and it belongs with curriculum leaders.

Image: The AI Assessment Scale, a five-level framework for AI use in assessment: No AI, AI Planning, AI Collaboration, Full AI, and AI Exploration.

You need a faculty-level strategy, not a school policy

School-wide AI policies are often out of date the moment they’re set down on paper. They tell you what the school’s position is, but they don’t tell you what to do in Year 10 Science or Year 8 English when faced with AI-related issues and opportunities.

Every subject is affected differently by GenAI. What works in English is entirely different from mathematics, design technology, or music. The ethical considerations vary. The practical applications vary. The assessments vary.

This is why I’ve long argued that faculties need their own strategies. A faculty-level AI strategy doesn’t need to be a 30-page document. It needs a clear vision, an honest assessment of where AI affects your discipline, some tested approaches (small experiments, not wholesale revolution), and a plan for communicating that to students and parents.

If you’re the person responsible for curriculum across multiple faculties, your job is to make sure each team has gone through this process, not to do it for them.

The “five principles” still hold

Late last year, I put together five principles for rethinking assessment with generative AI. They were:

  1. Validity first. Design assessments that generate trustworthy evidence of learning. Content validity, construct validity, consequential validity. This was important before ChatGPT and it’s important now.
  2. Design for reality. Create authentic assessments that reflect real-world workflows where AI may legitimately be used. Acknowledge that GenAI is more capable and harder to detect than most educators think.
  3. Transparency and trust. Communicate clear expectations about AI use. Trust that students want guidance and will generally do the right thing when given clear boundaries.
  4. Assessment is a process, not a point in time. Move away from single high-stakes moments. Build chains of evidence over time.
  5. Respect professional judgement. Resist rigid rules and surveillance technologies. Trust teacher expertise.

These principles are pro-learning, not anti-AI. And they’re principles that matter regardless of whether students use generative AI. For curriculum leaders, they provide a useful lens for evaluating any proposed change to assessment practice.

You don’t need to be an AI expert, but you do need to be AI-aware

I’m not suggesting that every Director of T&L needs to be prompting Claude at midnight. But you do need to understand what these tools can do, what they can’t do, and how quickly they’re changing.

If you haven’t used a current-generation model (Claude 4.6, GPT-5, Gemini 3) to attempt one of your own school’s assessment tasks, I’d strongly recommend it. Not because it’s fun (though it can be), but because it’s impossible to lead change around something you haven’t experienced firsthand.

You also need to be across the Australian Framework for Generative AI in Schools if you’re working in an Australian school, or the relevant local and national frameworks otherwise. These are the frameworks your community will expect you to be working from, and they can help you navigate some complex conversations with parents and leadership teams.

Change leadership is the real job

Like many things in the curriculum leader’s role, the hardest part of all of this isn’t the technology; it’s the people.

You’ve got teachers who are anxious about AI and teachers who are (overly) enthusiastic about it. You’ve got parents who want to know what the school is doing. You’ve got students who are already using these platforms whether you’ve addressed it or not. And you’ve got your own workload on top of everything.

Leading change in this space means being comfortable with uncertainty, being honest about what you don’t know, and creating the conditions for your teams to experiment safely. It means communicating clearly and often. It means modelling the kind of critical, thoughtful engagement with AI that you want to see from your staff and students.

None of this is easy. But it’s exactly the kind of work that curriculum leaders are positioned to do, if you’re given the time and support to do it well.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch.
