Artificial Intelligence Policy in Secondary Schools

As artificial intelligence technologies like ChatGPT and Microsoft’s new Bing search engine enter our classrooms, it is important for schools to develop clear and robust policies and guidelines. Having clarity over the acceptable and responsible use of AI is as important for students as it is for teachers, and can provide parents and the wider community with a sense of how AI is being brought into the school.

Important update to the OpenAI terms and conditions: OpenAI has changed the age limit from 18+ to 13+ with parental consent. Read more in the T&Cs here.

Policy update: since writing this article in February there have been many developments which help guide policy. I’ve written about the National Framework for Generative AI in Schools here, and the UNESCO Guidelines here.

Developing a school AI policy

This post explores some key areas to consider when introducing artificial intelligence. Specifically, this post deals with generative AI technologies like ChatGPT and image generation. The suggestions here should form part of a wider discussion around updating your existing cyber/digital policies, and should involve members of your school community including parents and students.

To produce this article, I reviewed AI policies from various sources at both the tertiary and K-12 levels, as well as sources such as the OECD, the European Council, and ‘responsible AI policies’ from organisations like Microsoft. It is important that you adapt these suggestions for your own context. I will also be maintaining this page with updates, so please use the contact form at the bottom to send through suggestions on any areas I’ve missed.

Each section offers a brief description followed by some practical applications to help guide the conversation. Note that all of these suggestions begin from the standpoint that the school encourages and accepts AI use as long as the ethical considerations are clear.

On November 8th I’ll be running a webinar on how educators can use image generation in their day-to-day work. Check it out on eventbrite.

Ethical considerations, transparency and accountability

All users must be aware of the ethical considerations of AI, particularly the bias and potential for discriminatory output in AI such as Large Language Models. Schools should ensure that the limitations and consequences of AI are explored through professional development and through lessons incorporating discussions of AI ethics. School leadership must accept ultimate accountability for AI systems used in the school, and teachers must assume accountability for their use in classrooms. Students should also be accountable for appropriate use, including through cyber safety/digital agreements.

Practical applications:

  • Provide students and teachers with guidance on ethical considerations when using AI, including discussions around fairness, transparency, and accountability.
  • Establish guidelines for the use of AI in decision-making processes, and ensure that all stakeholders are aware of the criteria and methods used to make decisions.
  • Develop a system for reviewing and monitoring the use of AI technologies in the school environment to ensure that ethical considerations are being taken into account, for example a register of AI technologies.
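The register mentioned above can start very simply. As an illustrative sketch only (the fields, example entry, and review logic are my own suggestions, not a prescribed standard), a school might track each approved tool like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a school register of approved AI technologies."""
    name: str            # e.g. "ChatGPT"
    vendor: str          # e.g. "OpenAI"
    approved_uses: list  # contexts where staff/students may use it
    data_shared: str     # what data leaves the school when it is used
    minimum_age: int     # per the vendor's terms and conditions
    next_review: date    # when this entry should be re-checked

register = [
    AIToolRecord(
        name="ChatGPT",
        vendor="OpenAI",
        approved_uses=["lesson planning", "drafting with teacher oversight"],
        data_shared="prompt text only; no identifying student data",
        minimum_age=13,  # 13+ with parental consent, per OpenAI's T&Cs
        next_review=date(2024, 1, 1),
    ),
]

def tools_due_for_review(register, on_date):
    """Return the names of tools whose scheduled review date has passed."""
    return [t.name for t in register if t.next_review <= on_date]
```

Even a spreadsheet with these columns would serve the same monitoring purpose; the point is that each tool has an owner, a record of what data it touches, and a review date.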

Data privacy and security

A school policy on AI should address issues related to data privacy and security, as the use of AI technologies often involves the collection, storage, and analysis of large amounts of personal data. In many cases this will require an update of existing data security policies. For Large Language Models like ChatGPT, this also includes the data supplied in prompts, e.g., avoiding providing identifying student data. Schools need to take steps to ensure that personal information is kept secure and that student privacy is protected. Where specific policies (e.g., on offshore data storage) are in place, these must also be considered.

Practical applications:

  • Establish policies and procedures for the collection, storage, and sharing of student data, including guidelines for data access and retention.
  • Provide training to teachers and staff on how to handle personal data in compliance with relevant privacy laws and regulations. Include training on why it is inappropriate to include personal data in LLM prompts.
  • Ensure that appropriate technical measures are in place to protect against data breaches and other security risks.
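One concrete technical measure for the prompt-data issue above is to strip obvious identifiers before any text is sent to an external model. This is a minimal sketch under stated assumptions: the patterns and the `SID` student-ID format are hypothetical, and pattern matching alone would not catch every identifier (e.g. names in free text), so it supplements rather than replaces staff training:

```python
import re

# Illustrative patterns for common identifiers; a real deployment would
# need broader coverage (names, addresses, local ID formats, etc.).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "student_id": re.compile(r"\bSID\d{6}\b"),  # hypothetical school ID format
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before prompting an LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

For example, `redact("Email jane.doe@school.edu about SID123456")` returns `"Email [EMAIL] about [STUDENT_ID]"`, so the prompt that reaches the model contains no direct identifiers.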

Access and equity

Artificial Intelligence technologies may require access to devices and services, some of which incur a fee. Schools must address issues related to access and curriculum integration, including equitable access to the technology and resources that support the integration of AI into the curriculum. The technology should also be equitable from an accessibility perspective, so that students with diverse needs can use it.

Practical applications:

  • Where possible, the school should provide students with access to appropriate technology and resources, such as AI software and tools, to support their learning.
  • Develop and implement a curriculum that integrates AI through accessible apps and software.
  • Provide professional development opportunities for teachers which explore the benefits and limitations of free and open access technologies, and technologies which can be used via an education license (for example Microsoft products).

Acceptable use

There are some situations where it is appropriate to use AI, and others where the limitations of the technology make it inappropriate. Sometimes, students will need to work on foundational skills before using AI tools. It is important that school policies articulate acceptable use of various AI tools in the classroom. Similarly, policies should indicate acceptable use for teachers and staff.

Practical applications:

  • Provide students with guidance on acceptable and responsible use of AI technologies, including discussion of the limitations and potential risks of using AI tools for research and writing.
  • Provide guidance for school staff on the acceptable use of AI technologies. For example, lesson planning may be supported by but not replaced by AI, due to its inherent biases and limitations.
  • Encourage the use of AI-generated material as a prompt for inspiration or guidance, rather than as a replacement for critical thinking and analysis.

Academic integrity

Academic integrity encapsulates many other areas, including citation and referencing, plagiarism, and cheating. AI policies should include guidance on forming academic integrity modules, professional development, and general school approaches to academic integrity.

Practical applications:

  • Develop guidelines for appropriate use of AI technologies in research and writing, and ensure that students understand the expectations for academic integrity when using AI technologies.
  • Work with faculties and library staff to develop academic integrity modules appropriate for all subject areas. Discuss what “academic integrity” looks like across the curriculum.
  • Develop policies and procedures to monitor and address suspected breaches of academic integrity related to the use of AI technologies. This may include moderation and benchmarking processes as well as procedures for following up on breaches.

Assessment processes

AI technologies are forcing us to reconsider certain assessment processes, particularly the use of unsupervised written assessments, multiple-choice questions, and examinations. The AI policy should inform some of the school’s discussion around approaches to assessment, including what should/should not be supervised, formative assessment methods, the use of “real world” or authentic assessment, and so on.

Practical applications:

  • Teachers should specify assessment conditions for the use of generative AI at the start of the teaching period/unit of work.
  • Define restrictions on the use of generative AI for assessment tasks based on educational reasoning, the nature of the task, and the function of generating particular evidence of student learning.
  • Provide clear guidelines for appropriate use of generative AI tools in assessment tasks and ensure that students understand the expectations as per the other areas suggested here.

Citation and referencing

Develop guidelines for the appropriate citation and referencing of AI-generated material in research and writing, and ensure that students understand the importance of accurately citing AI-generated material. How to cite tools like ChatGPT is still ambiguous. Although some journals have accepted ChatGPT as an “author”, most publishers recommend against this. Instead, AI tools should be acknowledged in the reference list or, in more formal academic writing, in the introduction or methods section.

Practical applications:

  • Provide students with guidance on the appropriate citation and referencing of AI-generated material in research and writing.
  • Develop guidelines for appropriate citation and referencing of AI-generated material, and ensure that students understand the importance of accurately citing AI-generated material.
  • Develop or augment existing research modules, such as those often taught by the school library.

Accuracy and credibility

School AI policies should acknowledge that generative AI tools such as Large Language Models have known issues with accuracy and credibility. This includes understanding the limitations of AI and ensuring that the credibility and reliability of AI-generated information are verified before it is used in research and writing. It also includes checking the output of AI when it is used in communications (e.g., school newsletters) and in the development of administrative resources (such as an AI policy itself).

Practical applications:

  • Provide students with guidance on accuracy verification when using AI-generated information in research and writing, such as how to manually check sources and links.
  • Encourage the use of multiple sources to verify the credibility and reliability of AI-generated information.
  • Develop guidelines for accuracy verification as part of research and academic integrity modules, and ensure that students and staff understand the importance of verifying the accuracy of AI-generated information.

Professional development

Ensure teaching and non-teaching staff have access to quality professional development around these emerging technologies. Identify professional development which explores both the practical applications of the technology and the ethical considerations.

Practical applications:

  • Provide training and support for all staff on the use of AI technologies, particularly in research and writing.
  • Identify opportunities for ongoing professional development related to the use of AI technologies in the school environment, including administration and areas where AI could support teacher workload.
  • Encourage teachers to share best practices for incorporating AI technologies into their teaching.

Community engagement

A school policy on AI should include provisions for community engagement, to ensure that all stakeholders are informed about AI use and have the opportunity to provide feedback and ask questions. This includes engaging with parents, students, and other members of the school community to ensure that they are informed about the use of AI technologies in the school.

Practical applications:

  • Develop a communication plan to inform parents, students, and other members of the school community about the use of AI technologies in research and writing.
  • Provide opportunities for stakeholders to ask questions and provide feedback on the use of AI technologies in the school environment.
  • Use stakeholder feedback through surveys and focus groups to inform ongoing evaluation and improvement of AI use in the school environment.

Reviewing and updating the policy

As AI technology is a rapidly developing field, any AI policy should be treated as a working document. The policy should be reviewed and updated regularly to ensure that it remains relevant and effective as AI technologies and their use in the school environment evolve. This includes identifying emerging trends in the use of AI technologies, evaluating their impact on student learning and engagement, and updating policies and procedures as necessary.

Practical applications:

  • Establish a process for regular review and update of the school policy on AI. This may include establishing a subcommittee of teaching/non-teaching/leadership staff.
  • Monitor emerging trends in the use of AI technologies in research and writing, and evaluate their impact on student learning and engagement.
  • Update policies and procedures as necessary so that they remain relevant and effective as AI technologies and their use in the school environment evolve.

Creating a policy document

The following document template serves as an example of how the above points could be used to form a complete policy document on AI. Note that the example is not intended to be used as-is: you should discuss all of these areas as a school community to develop a policy in your context. There may also be important areas which are not covered here. I will be adding to and editing this page as people provide input.

This template was generated by taking the above points and running them through ChatGPT. I’m also outlining the entire process here, as it is a process you could follow in your own school:

  1. I researched various policies from secondary and tertiary institutions and took notes on the areas covered.
  2. I used these notes to write the above sections, adding practical applications from my experience as a teacher/school leader.
  3. I copied the above sections into ChatGPT, and asked for the outline of a policy document plus sections for purpose, scope, and responsibilities.
  4. I prompted ChatGPT for expanded details for the first sections.
  5. I prompted for “points to discuss” for the remaining sections, based on the above information.
  6. I manually checked the output against the above sections, and formatted it in a Word document.
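The drafting steps (3–5 above) can also be sketched in code if you prefer to assemble your prompt systematically rather than pasting sections one by one. This is an illustrative sketch only: the section notes below are abbreviated placeholders for the sections in this article, and the assembled prompt is simply printed so you can paste it into whichever tool your school has approved:

```python
# Abbreviated placeholder notes standing in for the sections of this article.
SECTION_NOTES = {
    "Ethical considerations": "bias, transparency, accountability of staff and students",
    "Data privacy and security": "no identifying student data in prompts; secure storage",
    "Academic integrity": "citation, plagiarism, moderation and benchmarking",
}

def build_policy_prompt(notes: dict) -> str:
    """Build a drafting prompt that asks for a policy outline plus purpose,
    scope, and responsibilities sections, based on the supplied notes."""
    lines = [
        "Draft the outline of a school policy document on artificial intelligence.",
        "Include sections for purpose, scope, and responsibilities.",
        "Base the outline on the following notes:",
    ]
    for heading, note in notes.items():
        lines.append(f"- {heading}: {note}")
    return "\n".join(lines)

print(build_policy_prompt(SECTION_NOTES))
```

Whatever the model returns, step 6 still applies: check the output manually against your own notes before it goes anywhere near a published policy.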

If you or your school team would like to discuss Artificial Intelligence, or if you have any suggestions, questions, criticisms, or comments on this post, please get in touch below:

In March, I’ll be running a professional development session for educators on Teaching Writing in the Age of AI. It will cover teaching writing with and without AI, assessment, integrity, and how to incorporate AI into writing units. Register your interest below.


Further Reading

Deakin University – Referencing

Monash University – Guidelines for AI acceptable use

EU Council Press Release on AI in Education

OECD – Artificial Intelligence

Microsoft Responsible AI
