Shadow AI: Bringing covert AI use out of the dark

Every school I have worked with in the past two years has a shadow AI problem, whether they know it or not. Staff are using tools that haven’t been vetted; students are using tools that aren’t approved; parents are bringing AI assistants into parent-teacher interviews without asking. The patterns are so consistent across sectors, geographies, and school sizes that it is no longer useful to treat shadow AI as an exception or a failure of compliance. It is, instead, one of the defining governance challenges of this moment in AI adoption, and one that the updated VINE GenAI Guidelines for Schools 2026 directly address.

I wrote about the launch of the updated VINE Guidelines here; this post zooms in on one specific problem area and the framework we developed for it. If you want the full context, that post is the place to start, and the guidelines themselves are published here under a Creative Commons licence for any school to adapt.

Shadow versus covert

Shadow IT is not a new problem. Any IT manager who has been in the job longer than about six months has stories of staff signing up for a free tier of some SaaS product, entering organisational data, and only flagging it to IT when something breaks. What makes shadow AI different is the speed of adoption, the breadth of use cases, and the sensitivity of the data that tends to get entered. A teacher who uses an unapproved transcription tool in a student support meeting is not just introducing a new vendor to the school’s data governance picture; they are potentially processing sensitive student information through a system nobody has vetted, under terms and conditions nobody has read, in a jurisdiction that may or may not offer privacy protections equivalent to Australia’s.

The interviews we conducted during the VINE update surfaced a distinction that I think is crucial, and that I had not seen clearly articulated elsewhere: shadow IT is not the same as covert IT. Shadow IT is someone trying a new tool without checking, often with good intentions, often because the approved option is inadequate or slow to access. Covert IT is someone using a tool they know they shouldn’t be using, and deliberately hiding it. The first is a governance opportunity. The second is a cultural problem. Most shadow AI sits squarely in the first category, but punitive responses turn the first into the second, which is the outcome schools most need to avoid.

The VINE Guidelines put it this way: the goal isn’t zero AI tool use outside the approved list. The goal is zero covert AI tool use. Once that distinction is clear, the design of the governance framework follows fairly naturally.

Three zones, proportional governance

The core of the VINE approach is a three-zone model that applies proportional governance to AI tool use based on the risk profile of what is actually happening. The zones are not about policing which platform is being used; they are about what the application is doing, and what data is involved.

Zone 3 covers any use where student data is involved: AI processing student work, learning analytics, or reporting. This zone requires formal approval, a privacy impact assessment, IT and leadership sign-off, and a register of all Zone 3 tools. Worth noting: personal or identifying information should never be entered into any AI tool, per the broader privacy principles in the Guidelines. Zone 3 governs tools handling non-identifying student data, where formal approval is still warranted.

Zone 2 is classroom-facing. AI used in teaching, generating resources, providing feedback, or any context where AI-generated content reaches students. Governance here tightens: tools must come from the approved set or be vetted through a fast-track process, professional learning is required, and AI use is disclosed to students. This is the zone where most teachers actually spend their time, and where most of the interesting practice sits.

Zone 1 is personal productivity. Staff using AI for brainstorming, drafting, organising notes, or handling administrative tasks, with no student data involved. Governance here is light-touch: general awareness, transparency, informal disclosure. Staff do not need to fill in a form to use Claude to help draft an email. The expectation is simply that they are open about their use, and that they don’t enter personally identifiable information into any tool.

The answer to “can I use this?” is no longer yes or no; it is “which zone does this fall into, and what does the governance for that zone require?” That shift moves the conversation away from permission-seeking and into a more productive discussion about risk, data, and purpose.
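
To make that shift concrete, here is a minimal sketch of the zone logic as a decision function. It is purely illustrative: the function name, inputs, and governance strings are my own invention, not part of the Guidelines, and the “when in doubt, default to the higher zone” rule is borrowed from the quick-reference tool described later in this post.

```python
# A minimal sketch of the three-zone decision flow. Invented names;
# the real Tool 3.1 is a human-facing quick reference, not code.

def classify_zone(involves_student_data: bool,
                  reaches_students: bool,
                  unsure: bool = False) -> int:
    """Return the governance zone (3 = highest risk) for a proposed AI use."""
    if unsure or involves_student_data:
        return 3  # student data, or any doubt: default to the higher zone
    if reaches_students:
        return 2  # classroom-facing: AI-generated content reaches students
    return 1      # personal productivity: light-touch governance

GOVERNANCE = {
    3: "Formal approval, privacy impact assessment, sign-off, register entry",
    2: "Approved or fast-track vetted tool, professional learning, disclosure",
    1: "General awareness, transparency, informal disclosure",
}

# Example: a teacher wants AI-generated feedback that students will read.
zone = classify_zone(involves_student_data=False, reaches_students=True)
print(zone, "->", GOVERNANCE[zone])  # 2 -> Approved or fast-track vetted ...
```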

Download an image of the “Three Zones” approach here:

Infographic outlining the three zones of AI use: high risk involving student data, medium risk for classroom-facing use, and low risk for personal productivity. Each zone includes examples and governance requirements.

BMIT and the transition pathway

One of the most useful concepts we drew on from the enterprise literature is Business-Managed IT, or BMIT. The idea, developed in corporate environments, is that there is a middle ground between unsanctioned shadow use and full central IT control. BMIT describes a situation where a business unit (in our case, a teacher, a department, a faculty) takes ownership of a technology choice, but does so in the open, with shared responsibility for security and compliance alongside the central IT function.

For schools, BMIT provides a transition pathway. When shadow AI use is disclosed or discovered, the response is not immediate sanction; it is a structured conversation about whether the tool can be brought into overt, managed use. Some tools will turn out to be fine and can be added to the approved list. Some will need conditions attached (staff-only, no student data, time-limited). Some will need further assessment. A small number will need to be discontinued because they cannot meet the privacy, safety, or compliance requirements. All four pathways are legitimate outcomes, but they proceed from the same premise: that disclosure is valued, not punished.
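
For readers who think in flowcharts, the triage might be sketched like this. The four pathway names come from the Guidelines’ BMIT Transition Template (Tool 3.4, described below); the checks and their ordering are my own simplification, not the template itself.

```python
# A sketch of the BMIT transition triage. The four pathway names come
# from Tool 3.4; the checks and the data flow are illustrative only.
from enum import Enum

class Pathway(Enum):
    PERMIT_AND_MONITOR = "Permit and Monitor"
    PERMIT_WITH_CONDITIONS = "Permit with Conditions"
    PAUSE_AND_REVIEW = "Pause and Review"
    DISCONTINUE = "Discontinue"

def transition(meets_requirements: bool,
               meets_with_conditions: bool,
               assessment_complete: bool) -> Pathway:
    """Triage a disclosed shadow AI tool into one of four pathways."""
    if not assessment_complete:
        return Pathway.PAUSE_AND_REVIEW        # needs further assessment
    if meets_requirements:
        return Pathway.PERMIT_AND_MONITOR      # fine: add to the approved list
    if meets_with_conditions:
        return Pathway.PERMIT_WITH_CONDITIONS  # e.g. staff-only, no student data
    return Pathway.DISCONTINUE                 # cannot meet privacy/safety rules

print(transition(meets_requirements=False,
                 meets_with_conditions=True,
                 assessment_complete=True).value)  # Permit with Conditions
```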

The alternative, which is what most schools currently do, is to treat any unapproved use as a compliance breach. Staff who might otherwise have come forward with a useful tool stay silent, and the school loses the information it needs to manage its actual risk exposure. A punitive response drives shadow AI further underground, which is exactly the outcome the governance framework is meant to prevent.

The paved road

Connected to all of this is a principle borrowed from modern platform engineering: the paved road. The idea is simple. If you want people to use the official path, make it the path of least resistance. If the approved toolset is harder to access, slower to respond, or less capable than the shadow alternative, shadow behaviour is not a failure of staff discipline; it is the predictable outcome of poor service design.

For schools, the paved road means a self-service catalogue of pre-vetted AI tools that are immediately available on request, a tiered vetting process that matches the risk of the tool, and a genuine commitment to checking whether existing platforms (Microsoft 365, Google Workspace, the LMS) already offer a capability before introducing a new vendor. It also means treating shadow AI patterns as signal, not failure. When the same unsanctioned tool keeps showing up across a department, the school is being told something useful about a gap in the approved toolset, and the ICT Manager and AI Lead should be reviewing usage patterns at least once per semester to close that gap.
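
As an illustration of what a self-service catalogue could record in practice, here is one possible shape for a catalogue entry. Every field name is invented; the Guidelines do not prescribe a schema, only that approved tools, their conditions, and their review cycles are visible somewhere staff can actually find them.

```python
# Illustrative only: one entry in a self-service catalogue of pre-vetted
# AI tools. Field names and the example vendor are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    tool: str
    zone: int                               # 1-3, per the three-zone model
    conditions: list[str] = field(default_factory=list)
    next_review: str = ""                   # usage reviewed each semester

entry = CatalogueEntry(
    tool="Example Transcription Tool",      # hypothetical vendor
    zone=2,
    conditions=["staff-only", "no student data"],
    next_review="2026 Semester 2",
)
```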

Download an image of the “paved road” approach here:

An infographic contrasting “The Paved Road” (fast, transparent, and managed: self-service access, fast-track vetting, and safe management through the BMIT process) with “The Shadow Path” (hidden, unmanaged, and unsafe: unsanctioned tools and unmanaged use).

The guidelines toolkit

The VINE Guidelines translate these principles into a set of practical tools that schools can lift and adapt directly. The full toolkit is available via the guidelines website, but the headline components are worth naming:

  • Tool 3.1: Risk Zone Quick Reference, a simple decision flow for staff to work out which zone a given AI use falls into, with a default-to-higher-zone rule when in doubt.
  • Tool 3.2: AI Tool Vetting Checklist, adapted from the UK Department for Education’s Generative AI Product Safety Standards (2025), with the questions a school needs to answer before approving a Zone 2 or Zone 3 tool, covering data storage, training use, retention, compliance, and content filtering.
  • Tool 3.3: Shadow AI Audit Conversation Starters, a set of questions designed to surface shadow AI use in staff meetings, surveys, or one-on-ones without creating a punitive atmosphere. These are the questions that tend to unlock the conversation when a formal audit would shut it down.
  • Tool 3.4: BMIT Transition Template, the structured four-step process for assessing disclosed shadow AI use against the zone framework and selecting one of four pathways (Permit and Monitor, Permit with Conditions, Pause and Review, or Discontinue).
  • Tool 3.5: AI-Adjacent Policy Audit, a cross-reference checklist for identifying where existing school policies (academic integrity, privacy, digital citizenship, acceptable use, ICT procurement) need updating to reflect AI-specific concerns.

None of these tools are meant to be used in isolation. They sit inside a broader framework that assumes the school is doing the foundational work of designating an AI Lead, investing in differentiated professional learning, and treating governance as a living process rather than a one-off document.

Bringing it out of the dark

Shadow AI is not going to be eliminated by better policy, tighter controls, or more aggressive monitoring; it is going to be surfaced, disclosed, and managed, or it is going to stay in the dark and expose the school to risks it cannot see. Every school leader I have spoken to in the past six months has stories of discovering, after the fact, that a tool was being used in a way that would have failed every question on a vendor vetting checklist. In most cases, the staff member involved had no ill intent. They had a problem that the shadow tool appeared to solve: the official path was too slow, and the shadow path worked.

The VINE Guidelines offer a framework that takes this reality seriously and gives schools a way to respond that is proportional, transparent, and workable. The three-zone model, the BMIT transition pathway, the paved road principle, and the accompanying toolkit are all designed to answer the same question: how do we bring shadow AI out of the dark without punishing the people who help us do it?

If you want to read the full guidelines, you can access them at vineguidelines.leonfurze.com, and the launch post from last week has the broader context of the update, including the five themes that emerged from the consultation with member schools. The guidelines are published under a CC BY-NC 4.0 licence so any school, anywhere, can adapt them.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
