The draft schools AI framework for education is currently open for feedback, alongside a House Standing Committee parliamentary inquiry into Generative Artificial Intelligence in education.
The draft framework represents an important step for schools trying to adapt to the challenges and opportunities of Generative AI. I’ve been working with schools on developing their own policies and guidelines, and we are now incorporating elements of the draft framework to align with these national principles.
Schools are looking for ways to take these principles and turn them into actions. This can be particularly difficult with Generative AI, which is an ethically complex technology that not even its creators fully understand. But the industries behind the technology will not stand still. Calls for “pauses” in AI development have been quickly forgotten, and schools are already beginning to trial the technologies in the classroom.
Like many others, I have concerns about implementing these technologies in the classroom without more research into the potential impact on students and teachers. My own research will hopefully go some way to understanding those implications.
While we wait for the research, however, we also know that students and teachers will increasingly be using the technologies. We need to find ways to take the principles outlined in the draft national framework and put them into action.
In this post, I explore the six key principles outlined in the draft framework and make some recommendations for how schools can put them into practice.
Overview of the principles
Here is a list of the core elements, which also serves as the table of contents for the rest of this post:
Core elements
- Teaching and learning
- Human and social wellbeing
- Transparency
- Fairness
- Accountability
- Privacy and security
At the end of the post, I’ve turned the recommendations into a few practical examples, including a sample register of AI apps, a timetable for PD, and a GAI assignment for Visual Arts.
1. Teaching and learning
Right at the top of the list is the focus on Generative Artificial Intelligence for Teaching and Learning. This core element is broken down into: achieving a positive impact on teaching and learning outcomes; instruction in the technical elements, strengths, and limitations of GAI; and a focus on enhancing human cognition.
There is further elaboration which stresses that the positives of GAI must “clearly outweigh risks or negative implications,” and that one way of doing this is through effective instruction which helps students to avoid harm and use the technology in ways which are responsible and accountable.
The focus on human cognition reflects concerns that GAI will lead to an offloading of critical and creative thinking skills, replacing rather than enhancing students’ capacity to think for themselves.
A lot has been said about the importance of teaching digital skills, coding, algorithmic thinking, and the technical aspects of AI. Some, like Bill Gates, believe that all students need to be taught computer science from an early age.
I think a more nuanced approach would be appropriate – schools don’t need to rush into investing in complex AI programs, new subject areas, or expert instruction. In fact, many schools in Australia already struggle to deliver just the basics of digital technologies as required by the Australian Curriculum. There are simply not enough digital technologies teachers (or teachers, full stop) to staff new courses in AI.
Instead, my recommendations focus on where GAI instruction can be worked into existing subject areas, and the kind of support teachers and schools will need to do so.
Recommendations
- Determine who is responsible for the implementation of GAI into the curriculum. This might be a Director of Teaching and Learning or equivalent role, a dedicated subcommittee or team, or another individual appointed specifically for the role. In any case, that person or team should have access to the latest research through academic journals and institutions like the University of South Australia’s C3L and Deakin’s CRADLE.
- Learning about how GAI works does not need to be a separate subject: it can be integrated into existing disciplines. GAI text generation, for example, may fall to the English department, while image generation goes to the Visual Arts and video generation to Media Studies. While interdisciplinary subjects in GAI seem attractive, the current curriculum, timetables, and school structures may make them impractical. Work with what you have now, and consider how subject disciplines might change in the future.
- Invest in professional development in creative and critical thinking skills. There are many opportunities out there which predate this wave of technology, and which focus on teaching and learning practices that prioritise these skills.
- The Teaching and Learning team should have frank discussions about where GAI is and is not needed in the curriculum. There will be many occasions where it is preferable to have students working technology-free: that has been the case since long before GAI was on the agenda. These conversations will overlap with discussions of assessment and academic integrity.

2. Human and Social Wellbeing
This core element advises schools to focus on the potential harms of GAI and to mitigate these risks by supporting student wellbeing, ensuring a diversity of perspectives, and respecting human rights and the autonomy of individuals.
Of all the elements, this is perhaps the most esoteric and difficult for schools to approach. There are significant ethical concerns with GAI on a scale which schools can do little to address. Systemic biases are reinforced in the datasets which power these models and, despite the guardrails put in place by developers, may be replicated in the output.
While the draft framework highlights these issues, there are as yet no concrete suggestions for addressing them. Again, these systems will be implemented in schools before these fundamental issues are addressed on a technological level: in effect, schools will have little choice but to contend with the problems.
Schools can, however, exercise some control over how and where GAI is implemented, and the conversations had with students, staff, and the community.
Recommendations
- Through staff professional development and student instruction, ensure that teachers and students are aware of the inherent biases and potential for discrimination in GAI systems. These ethical concerns exist at the core of the technology, and should be foregrounded in all uses of the technology.
- Consciously analyse GAI output for potential biases, including through instruction in various subject areas. For example, as part of language analysis in English, students could look for potential bias, discrimination, and stereotyping in generated text. Similarly, students could identify and discuss biased representations in image generation.
- Understand that Generative Artificial Intelligence represents a particular worldview, drawing on a dataset of predominantly English-language text written by white, Western males. Understand the implications of this, particularly for EAL, culturally diverse, and otherwise marginalised students.
- Discuss the implications of bias for assessment practices. Consider how neurodivergent students may be underrepresented in datasets and the implications that might have for students using GAI to express themselves. Invest in professional development specifically designed with EAL, neurodivergent, and otherwise marginalised students in mind.
- Do not implement Generative Artificial Intelligence into classrooms and assessment practices until clear school policies and guidelines have been written with input from staff, students, and the school community.
- Advocate for products and services which have clear approaches to dealing with these ethical concerns. It is highly likely that GAI will be introduced into schools via existing platforms and services such as Microsoft and Google products, Learning Management Systems, and related applications. School and sector leaders interact with these companies at various levels, and as consumers of these products have a right to ask these difficult questions.
3. Transparency
The draft principles highlight the need for transparency both in how the technologies are used, and how they work. Overlapping with the first two areas, this involves providing clear information about any GAI tool in use, informed consent, and “explainability”.
Again, there are issues here which schools cannot contend with. The explainability of GAI systems, for example, is so complex that even the companies who develop the technologies are uncertain of how they actually work. To account for this, the principles highlight that the ability to explain the reasoning behind AI decisions and the impact on individuals should be separated from the “interpretability” or inner workings of the GAI model.
In essence, this means that schools must be able to explain how and why GAI is being used and the impact on any decision-making in schools, such as assessment practices and curriculum decisions. This may apply to the use of personalised learning chatbots, academic streaming as a result of AI decisions, and so on. There is some crossover in this framework between Generative AI, like ChatGPT, and broader applications of AI such as predictive algorithms.
Recommendations
- Ensure that all use of Generative Artificial Intelligence is by informed consent, with the opportunity to opt out. While this may complicate curriculum processes – for example where GAI is used in a classroom activity or assessment – it is a necessary process for such an ethically complex technology.
- Informed consent and transparent, plain English statements should be made available to parents and carers, possibly as a component of a school GAI policy.
- A register of GAI tools and services used by teachers and students should be maintained by an appropriate person or team. There are already thousands of “AI powered” apps and services available, and not all are safe or appropriate. I have included an example register at the end of this post.
- The register of apps and services should include provisions for teachers to add appropriate software, and should not limit the potential for teachers to explore new technologies or uses.
- Where GAI is to be used in assessment tasks – by teachers or students – approaches, appropriate apps and services, and exclusions should be made clear in the assessment documentation. For example, if students are permitted to use GAI in a written task, it should be outlined in the task sheet or instructions. If GAI is to be used by teachers, for example in assessment or reporting, this should be made clear in the appropriate documentation or on the LMS.
4. Fairness
Core element number four builds on Transparency and the ethical concerns of Human and Social Wellbeing, and requires GAI to be used in ways which are accessible, fair, and respectful. There is potential for GAI to make certain processes in schools more accessible. For example, Natural Language Processing applications are being developed to help with assistive speech, image recognition for the vision impaired, and chatbots for personalised learning.
However, schools should not rush into implementing GAI for the purposes of personalised learning, differentiation, and modification. Given the ethical concerns outlined elsewhere, particularly for the potential discrimination in GAI, schools should be cautious in how and where GAI is used.
Element four also covers the important areas of academic integrity, copyright, and cultural and intellectual property.
Recommendations
- In line with other recommendations, keep a register of how and where GAI is used in teaching and assessment. Ensure that teaching and learning practices are accessible, for example working with Learning Support staff to identify whether certain students might be excluded from curriculum or assessment practices by implementing GAI.
- Be cautious of the current technology’s capabilities in terms of “personalised learning”. This is likely to be one of the areas attracting the most development in GAI, which also means it is among the most open to companies and developers attempting to take advantage of schools.
- Develop clear and reasonable academic integrity policies, or extend existing policies to incorporate GAI. Universities are leading the way in developing these policies and guidelines, for example Deakin University and Monash University. Note that many of these policies do not exclude or penalise the use of GAI.
- Acknowledge that attempting to “detect” GAI is problematic on a number of levels. There is currently no effective detection software for GAI. Existing detection software has been demonstrated to discriminate against EAL authors. Detection also places an additional burden on teaching staff and heads of faculty.
- Work with local Indigenous community leaders and identify Indigenous research into the implications of AI for ways of knowing and cultural and traditional practices. Indigenous scholars and organisations are working on understanding the implications of these technologies and should inform school policies and guidelines.
- Respect the copyright of authors, creators, and artists. GAI systems have largely been developed with no regard for intellectual property, and existing models are founded on datasets containing copyrighted materials and IP. Acknowledge this issue in discussions with staff, students, and community and avoid uses which exacerbate the issue. For example, avoid uses of GAI in the nature of “produce writing in the style of…” or image generation which replicates the work of other artists, living or dead.
5. Accountability
The principles recommend that all decision-making ultimately remains in human hands. Sometimes, this is referred to as a “human in the loop” process. As applied to GAI, this means that all practices or processes using the technology must have human oversight, and applications and services which automate decision-making should have an ultimate human authority.
Systems must also be operated in ways which are reliable and understood, and monitored to ascertain the impact on student learning. This area will be particularly difficult for schools. At this stage, I don’t think there is enough research to clearly identify the impact – positive or negative – of GAI on learning. As with all digital technologies, schools should avoid the assumption of cause-and-effect with GAI and work from the basis that GAI can improve or detract from learning through how it is used by students and teachers, and not through some inherent effect of the technology itself.
To monitor the impact of GAI on teaching and learning, schools will need to understand how and where students are using GAI. There will need to be judgements made about what knowledge and skills benefit from the use of GAI, and where it would be better for students to learn without the technology.
Recommendations
- As outlined under principle one, a person or team should have ultimate responsibility for GAI-based decision-making in the school. This responsibility should be clearly outlined in their position description and supported with appropriate professional development.
- Senior leadership in schools should maintain clear communication with GAI providers to ensure that they understand the decision-making processes involved. For example, school leaders should attend the forums and panels offered by organisations like Microsoft and Google for education which provide opportunities to ask questions about reliability and transparency.
- Teachers should be provided with professional development to understand more about the strengths and limitations of GAI for assessment purposes. For example, while it is possible to enter student work into GAI like ChatGPT and ask for a summary or feedback, such use may transgress student intellectual property rights and the feedback provided may be impacted by the ethical and bias issues discussed earlier.
- All use of GAI for assessment and reporting purposes must be clearly understood by staff, students, and the community.
- Schools should establish guidelines for when and where staff can use GAI to support assessment and reporting practices. For example, GAI like ChatGPT can support reporting by turning dot-point notes into more fluent writing, but teachers using the application in this manner should be aware of privacy, IP, and related concerns.
- If schools elect to use GAI in assessment practices, policies should include provisions for students, parents, and carers to contest any decisions made with Generative AI, on the basis that these technologies may be inaccurate.

6. Privacy and security
In the schools I have worked with on GAI policy and guidelines, privacy and security have tended to be among the primary concerns alongside teaching and learning. Most schools have cybersecurity and data privacy policies which can be adapted to suit Generative AI, and which comply with Australian state and national regulations. However, there are some specific concerns related to GAI.
The final principle in the draft framework requires continued compliance with laws and regulations, as well as clear privacy disclosures and protections for students.
Recommendations
- Examine and update existing privacy and data security policies to ensure they comply with current state and national regulations.
- As part of a school policy or guidelines for GAI, include a privacy disclosure statement in plain English for students, parents, and carers.
- Understand where data entered into GAI systems is stored, and how it is used. Companies such as OpenAI may use input into GAI systems for training purposes, meaning data entered by students and teachers becomes part of the dataset. Organisations have varying policies regarding the retention and use of data, and these should be understood before systems are used in schools.
- Understand the various opt-in and opt-out policies of GAI apps and services, and the relevant terms and conditions. Include these in the register of technologies suggested above, for example listing technologies approved for use in the school alongside the T&Cs for retention of training data. I have included an example register at the end of this post.
- Incorporate discussions of GAI into existing student information on cybersecurity and online privacy. Where external providers are used, e.g., for cybersafety talks, enquire as to whether they can incorporate discussions of Generative AI.
- Where necessary, de-identify any student data input into GAI systems (a minimal code sketch follows this list). Provide staff professional development on what data can and cannot be entered into GAI. Never enter personal information about students into GAI systems without explicit informed consent from students and parents/carers and a clear understanding of where that data goes and how it is used.
- Only use systems which have a clear and transparent approach to the storage and use of input data. For example, identify systems which do not share data with third party providers, and which delete user data after 30 days.
- When using systems which interact with personal data, for example via integration into a Learning Management System or student data system, ensure clarity over how the data is stored and handled.
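To illustrate the de-identification point in the list above, here is a minimal sketch of one way obvious identifiers might be stripped from text before it is sent to a GAI service. The function and name list are my own hypothetical illustration, and simple pattern matching like this will miss nicknames, addresses, and contextual clues, so it should supplement, never replace, human review.

```python
import re

def deidentify(text: str, student_names: list[str]) -> str:
    """Replace known student names and obvious identifiers with placeholders.

    Deliberately simple: real de-identification needs human review, since
    pattern matching misses nicknames, addresses, and context clues.
    """
    for name in student_names:
        # Replace each known name, ignoring case, with a neutral placeholder.
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    # Redact email addresses and long digit runs (e.g. student IDs, phone numbers).
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{6,}\b", "[ID]", text)
    return text

print(deidentify(
    "Jane Citizen (ID 20481234, jane@example.com) wrote this response.",
    ["Jane Citizen"],
))
# -> [STUDENT] (ID [ID], [EMAIL]) wrote this response.
```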
Putting the principles into practice: example strategies
While the principles are still in draft format, they provide a basis for a robust discussion in schools about how GAI will be implemented in classroom activities, assessment and reporting practices, and elsewhere. However, there is still a need to understand what these principles look like in practice. Through my recommendations above I have tried to identify ways schools can bring these principles down to earth, but I feel like a few worked examples are still in order.
The following examples demonstrate a few ways schools can take the principles and turn them into something more concrete.
Example 1: Register of GAI applications and services
Purpose: Maintain a comprehensive list of all GAI tools and services being used in the school, along with their features, purposes, and potential ethical concerns.
Format: A shared spreadsheet or database accessible to relevant stakeholders.
| Tool Name | Purpose | Data Storage Policy | Known Risks | Approved for Use? |
| --- | --- | --- | --- | --- |
| ChatGPT (OpenAI) | Text generation for projects in a variety of subject areas | Data deleted after 30 days if chat history is turned off, otherwise retained for training purposes | Bias in dataset, concerns related to training, potential privacy/security concerns | Yes |
| Midjourney | Image generation for art projects | Data stored indefinitely, no option to opt out | Biased gender and racial representations, possible sexual content… | No |
| Microsoft Bing Chat | Text and image generation | Data stored and used for training purposes | As per ChatGPT; additional guardrails from Microsoft to filter inappropriate content… | Yes |
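If the register is kept as a shared spreadsheet, it can also be exported and queried programmatically. Here is a minimal sketch of how an approved-tools list might be pulled from a CSV export; the file name, column headings, and function are my own illustration, not something prescribed by the framework.

```python
import csv

# Hypothetical CSV export of the shared register spreadsheet above.
REGISTER_PATH = "gai_register.csv"

def approved_tools(path: str) -> list[dict]:
    """Return the register rows marked as approved for use."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            row for row in csv.DictReader(f)
            if row["Approved for Use?"].strip().lower() == "yes"
        ]

if __name__ == "__main__":
    for tool in approved_tools(REGISTER_PATH):
        # Surface retention terms alongside each approved tool so staff
        # can check data storage policies before classroom use.
        print(f"{tool['Tool Name']}: {tool['Data Storage Policy']}")
```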
Example 2: Timetable of staff professional development
Purpose: Schedule training sessions for staff to acquaint them with GAI’s implications and potential in education.
Format: A calendar or timetable shared with all teaching staff.
| Date | Topic | Trainer | Duration | Organised by |
| --- | --- | --- | --- | --- |
| 10th Sept 2023 | Introduction to GAI | Jane Smith (External Expert) | 2 hours | Dir Teaching and Learning |
| 17th Sept 2023 | Ethical Implications of GAI in Classroom | Alan Brown (School Counsellor) | 1.5 hours | Dir Wellbeing |
| 24th Sept 2023 | Using GAI for Creative Projects | Emily White (Art Teacher) | 2 hours | Visual Arts Faculty |
| 1st Oct 2023 | GAI for Assessment: Pros & Cons | John Green (English Teacher) | 1.5 hours | English Faculty |
Example 3: GAI parent and carer forum
Purpose: To educate both parents and carers about the advantages, limitations, and potential pitfalls of GAI in education.
Format: A series of interactive seminars and information sessions.
Actions:
- Organise termly workshops (in-person or online) demonstrating how GAI tools are used in the classroom, with live demonstrations.
- Distribute information via online communication services detailing the school’s approach to GAI, including benefits, data privacy information, and ethical considerations.
- Offer a Q&A session after each workshop, allowing parents and carers to voice concerns or ask questions about GAI’s use.
- Ensure transparency by sharing which GAI tools are being used and for what purposes.
Example 4: Classroom Assignment: “Decoding Biases in AI Image Generation”
NB: This assignment was generated using ChatGPT by providing the principles as input along with the following prompt: “Create an example of a classroom assignment in visual arts exploring generative AI bias in image generation. Make sure it addresses the concerns of the. Use the image generator that is available via Microsoft’s Bing chat”
Purpose: Students will explore the capabilities and limitations of Generative AI in image generation, particularly focusing on understanding biases and their manifestations. They will use Microsoft’s Bing chat image generator for hands-on experience.
Assignment Overview:
- Research & Discussion (Day 1-2): Begin with a class discussion about Generative AI, focusing on how it works and its prevalent use in modern digital art. Highlight potential biases and how these biases can manifest in generated images.
- Practical Exploration (Day 3-4): Students will use the image generator in Microsoft’s Bing chat to create a series of images based on different prompts. They should be encouraged to use a variety of subjects, cultures, and themes to see how the AI responds.
- Analysis & Reflection (Day 5-6): Students will choose two generated images: one that they believe shows AI bias and one that they believe is neutral or unbiased. They will write a reflection or create a presentation explaining their choices and the perceived biases or lack thereof.
- Group Review & Feedback (Day 7): In small groups, students will showcase their chosen images and discuss their findings. This promotes peer learning and broadens the understanding of AI biases.
Detailed Assignment Steps:
- Exploration:
- Access Microsoft’s Bing chat and familiarise yourself with the image generator.
- Generate at least 10 images based on a diverse set of prompts you devise.
- Selection:
- From the generated images, select one that you believe might be showcasing a bias. This could be in terms of culture, gender, race, or any other form of bias.
- Also select one image that seems neutral or unbiased.
- Documentation:
- Create a digital or physical portfolio of these images.
- Accompany each image with the original prompt you provided to the AI.
- Reflection:
- For the image showcasing bias: Describe what biases you perceive and how they are evident in the image. Discuss potential reasons for this bias based on your understanding of AI and the class discussions.
- For the neutral/unbiased image: Describe why you believe this image is neutral. Discuss its elements and composition.
- Group Review:
- Present your findings to a small group.
- Listen to the interpretations of your classmates.
- Engage in a discussion about the variety of biases observed and the potential reasons behind them.
Evaluation Criteria:
- Engagement with the Tool: Effective use of the Bing chat image generator.
- Analytical Skills: Ability to identify and explain biases or neutrality in the generated images.
- Reflection Quality: Depth of understanding showcased in the written or presented reflection.
- Participation: Active participation in group discussions and feedback sessions.
Conclusion
In this post, I have attempted to take the draft principles for AI in education and recommend a few ways schools can start to implement them. The approach needs to be focused throughout on acknowledging the strengths and limitations of GAI and the ethical concerns of the technology, particularly concerning bias.
Ultimately, all of these processes rely on appropriate professional development and support for teachers. Schools need to decide who will be responsible for outlining policies and guidelines, and for ensuring that staff, students, and the community are kept up to date.
The technology is moving incredibly quickly, and is already being trialled in schools. Research into its efficacy and the long-term impacts on teaching and learning will lag behind its deployment, which means that schools will need to take responsibility for ensuring these technologies are implemented as ethically and responsibly as possible.
If you have any comments or questions about this post or the principles, please get in touch using the contact form below.
Postscript: The GAI learning curve
As I was putting the finishing touches on this blog post, something happened which highlighted the need for experimentation, trial and error, and an understanding of the limitations of GAI.
To make the images for this post, I used the image generator Midjourney. It’s subscription based and quite difficult to access as it runs in a Discord server, and as such I don’t recommend it for use in schools. However, it is very powerful and capable of generating incredible images.
Sometimes.
At other times, the unpredictability and lack of transparency over how these models work lead to frustrating and frankly bizarre results. When trying to generate the cover image, I used the following prompt:
/imagine prompt: abstract illustration in purple and blue tones, education, artificial intelligence, blog header image, digital collage, secondary school –ar 2:1 –no faces –no robots
Here’s what it came up with:

Better keep working on it…