On Friday I told a room full of school leaders that they didn’t need an AI policy. You could hear a pin drop, right up until people started to laugh and nod their heads.
I was speaking with members of the Independent Primary School Heads of Australia (IPSHA) in a series of sessions covering AI progress, risk and safety, and the practical use of GenAI platforms like Copilot and Gemini. Many of the people in the room are directly involved in school governance. A number of them already have AI policies in place, often written reactively in 2023 as a response to ChatGPT.
So hearing those words — you don’t need an AI policy — obviously struck a chord.
I’ve said the same thing to vocational education and university leaders, as well as executive teams of businesses in industries like finance and health. The reaction is usually similar: anything from amusement to shock to outrage.
Writing an AI policy seems to be pretty high on the agenda for most organisations at the moment, but I don’t think it’s a good idea — and in this article I’m going to explain why.
Welcome to the 90s
I was in primary school when the world wide web emerged, suddenly giving global access to the kind of networked communications previously reserved for academics, governments and the military.
I remember the first RM Connect PCs being installed in my primary school. I remember using Logo to drive that little turtle around, and I remember the early days of the web fondly.

What I don’t remember is the sheer volume of policy documents that must have been written in the 90s, because luckily I was too young to care. I almost wish I was still too young to care, because I suspect the same patterns are now unfolding with artificial intelligence.
Recently, I had a dig through the Internet Archive to see if I could find a few IT policies from the late 90s that attempted to govern this newfangled internet thing. Some of them make for hilarious — and occasionally depressing — reading.
For example, in 1992, the US government released the NSFNET Backbone Services Acceptable Use Policy – essentially the rules for the entire early internet. It explicitly banned all commercial activity on the network. Every business transaction. Every invoice. Every sale. The foundational elements of the modern digital economy were literally against federal policy. The thing the internet would become was against the rules from day one, a restriction that was only peeled back over the years that followed.
In 1997, around the same time I was leaving my primary school’s RM C-Series behind and moving into the IT labs of secondary school, the University of Pennsylvania had to explicitly ban students from physically drilling through dormitory walls to splice Ethernet cables and set up rogue routers. That’s a real clause in a real policy – “unauthorized wiring” – because students were literally punching holes in buildings to share bandwidth.
A year later, in 1998, the US Patent and Trademark Office released an internet usage policy that required patent applicants to submit a physical, written letter to give the government permission to send them an email. You had to send a paper letter through the postal service to authorise the government to use “electronic mail”. The policy warned, in bold bureaucratic language, that internet communications were “neither encrypted nor secure” and that anyone who chose to email the government did so “at their own risk.”
But what strikes me is not that these policies now seem naive or quaint or hopelessly outdated. It’s how much they remind me of the AI policies that education institutions and businesses have been trying to write in the wake of the 2022 ChatGPTpocalypse.
Twenty or thirty years down the track, with the gift of 20/20 hindsight, we can see how hard people were working in the 90s to treat the internet — or the web — as a discrete entity that could be managed.
With that same hindsight, we can see how the management of internet technologies in this way quickly became a hopeless endeavour: through the dot-com boom and bust of the early 2000s, through the Web 2.0 era, the emergence of social media, and beyond.
Most organisations no longer have an “internet policy”, because the internet affects and often underpins every aspect of the organisation’s business. Instead, their marketing, communications, procurement, hiring, HR, safety and financial policies all have the internet in them.
The quicker we reach this conclusion with AI, the better.
A system-level technology
Artificial intelligence, like the world wide web, is a system-level technology: it integrates with our existing digital infrastructure and operates both above and below it.
Since 2023, I’ve been working with organisations, schools and universities in an advisory capacity. This often starts with a conversation about AI policy. As quickly as possible, I try to reframe this conversation.
Policies are concrete. Hard to update. And often written by the wrong people: those who end up delivering the outcomes of a policy are rarely in the room when it is written. Policies have to be ratified, signed off by executive teams, boards, and sometimes lawyers.
Here’s what I suggest we do instead:
- Produce flexible, lightweight guidelines on the use of AI by members of the organisation — such as teachers, administrative staff, students and the broader community in schools.
- Audit and update the policies we already have.
We have an AI policy at home

Somewhere on a networked hard drive, in SharePoint, a Google Drive folder, or (god forbid) a filing cabinet, you have every policy you could ever need with regard to artificial intelligence.
These policies were born in the fires of the late 90s, hardened against 30 years of cyber threats, social media, phishing emails, and staff seemingly deliberately trying to install viruses on Windows laptops.
You’ve got digital fair use agreements. You’ve got cyberbullying policies. You know what to do in the case of a data breach.
I want you to lay every one of these policies on the boardroom table and grab a big red pen, because it’s editing time.
Here are just a few of the policies that I recommend schools look at when we start this process. If you’re a university leader or a business executive reading this, I’m sure you have many of these policies and more:
- Academic integrity
- Assessment submission and marking policies
- Cybersafety and digital consent
- Digital user agreements (staff, student, parent)
- Data breach
- Cyberbullying
- Marketing and comms: social media
- External/outbound comms
- Data storage and security
- Software procurement
The point is: any policy which already intersects with the internet, digital systems, online cloud-based services, or even local on-premises hardware now intersects with artificial intelligence.
This might sound daunting, but think of it this way: you’ve already put all the hard work into writing and ratifying these policies. I’m just asking you to add a dot point or two.
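For example (and this is illustrative, not legal advice), a data storage policy might gain a dot point along the lines of “staff must not enter student records or other personally identifiable information into generative AI tools that have not been approved under our software procurement process”. One sentence, not a new document.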
Audit and update
At this stage, feel free to use the AI platform of your choice to help with this audit. It’s a big job and I’m sure you’ll take all the help you can get. Just make sure you’re doing it the right way.
Don’t throw 50 policy documents at Microsoft Copilot or Gemini-Fast and expect it to do a good job. You’ll end up with a dot-point list filled with emojis and about 50 hallucinations per 100 words.
You’ll want an AI model with a bit of grunt — a Gemini 3 Pro, Claude Opus 4.6, ChatGPT 5.3, or Copilot running the higher-end OpenAI or Anthropic models — ideally a thinking or reasoning model, maybe a deep research model. If your organisation has access to it, this could be a deep research tool connected to SharePoint or Google Drive with direct access to files. If not, it needs to be a model or an application — like a Claude Project, a custom GPT in ChatGPT, a Gemini Gem, or NotebookLM — that can handle multiple files. Of course, make sure that whatever system you use has solid data privacy T&Cs and sits within your enterprise’s existing policies…
If you do decide to use AI, you also need strict human oversight and review. These are sensitive and important documents we’re working with, and you can’t afford to rely on a chatbot to do all of the work.
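If someone on your team is comfortable with a bit of scripting, you can run the same kind of first-pass audit through an API rather than a chat window. Below is a minimal sketch in Python using the OpenAI SDK; the model name, prompt, and `policies` folder are placeholder assumptions rather than recommendations, and the output is a draft for human reviewers, not a finished policy.

```python
# A minimal sketch of an AI-assisted policy audit.
# Assumes an OPENAI_API_KEY in the environment and a folder of
# plain-text policy documents. Model name and prompt are
# illustrative placeholders - check your own procurement and
# data privacy policies before sending anything to an API.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AUDIT_PROMPT = """You are reviewing a school policy document.
List every clause that is affected by generative AI (data entered
into third-party tools, authorship and integrity, image generation,
and so on). For each clause, suggest one or two dot points the
policy could add. Do not invent clauses that are not in the
document."""


def audit_policy(path: Path) -> str:
    """Ask the model for suggested AI-related additions to one policy."""
    text = path.read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder: use a reasoning-capable model you have access to
        messages=[
            {"role": "system", "content": AUDIT_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for policy in Path("policies").glob("*.txt"):
        print(f"--- {policy.name} ---")
        # Treat this as a first draft only: every suggestion still
        # needs human review and formal sign-off before it goes
        # anywhere near the actual policy.
        print(audit_policy(policy))
```

The same pattern works with Anthropic’s or Google’s SDKs; the important part is the human review step at the end, not the specific model.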
Lay everything on the table – real or virtual – and get to work. Consider local and national frameworks, such as the Australian Framework for Generative AI in Schools if you’re an Australian K-12 school, or state and federal government guidelines. Look to existing regulatory and legal advice for high-risk areas like deepfakes and data breaches. You will find that your existing policies account for a lot of “what’s the worst that could happen?”, but that AI adds a new spin on many of these areas.
For example, the primary risk of deepfakes is the creation and sharing of non-consensual explicit images. When the image is of a minor, this is covered by existing CSAM laws, but for images of adults the law is much newer; in Australia, for instance, the Criminal Code Amendment (Deepfake Sexual Material) Act only passed in 2024.
With data storage and data breaches, many of the risks associated with AI are really just traditional cybersecurity risks: the breach and sharing of data via unauthorised use of cloud-based services, for example. I’m willing to bet that you already have a policy that says something to the effect of “staff cannot put personally identifiable information into unsecured platforms”, since that has been a legal requirement for much longer than ChatGPT has been on the scene.
Do it right, and keep it up to date
If you try to write an AI policy now, it will be out of date in 3-6 months. The technology moves so quickly that 2023 policies dealing with “chatbots like ChatGPT” couldn’t possibly account for recent advances like Codex, Claude Code, and OpenClaw, all of which have a much higher risk profile than a browser-based chatbot.
But you don’t need to stay ahead of every technological leap. Think about the risks these “coding agents” actually present: the deliberate or accidental sharing of sensitive data, system-level access to infrastructure, exposing data to potential “bad actors” online, and so on. Those risks are probably still covered by your existing policies.
Write some flexible AI Guidelines which govern staff, student, and community use of AI. Point those guidelines back to existing policies when necessary, and then try to keep the guidelines as up-to-date as possible without getting too bogged down in “which AI can do what now?”
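As a rough, illustrative sketch (a starting point, not a template), a one-page set of guidelines might cover:
- Which AI tools are approved for staff and student use, and how to request new ones (pointing to your software procurement policy)
- What data can and cannot be entered into AI tools (pointing to your data storage and security policies)
- Expectations around AI use in schoolwork and assessment (pointing to your academic integrity policy)
- When and how AI-generated content should be disclosed
- Who to contact when something new comes up that the guidelines don’t cover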
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
