IYKYK Part 3: Who Gets to Know?

In the first post in this series, I argued that GenAI has a “discoverability problem”. The interface gives users no signals about what’s possible, and people’s mental models get locked in by their first experience with the technology. In the second post, I shared some examples that broke my own ceiling.

The discoverability problem is not just a design failure; it's also an access problem, and one that flows directly from a business model deliberately chosen by the companies building these platforms.

Pay to Play

Every major AI platform fragments its capabilities across pricing tiers and different apps. ChatGPT has Free, Plus, Pro, and Team. Claude has Free, Pro, and Team. Google Gemini has Free, Advanced, and Business. Microsoft Copilot has Free, Pro, and roughly seventy-six thousand various flavours of the chatbot that schools encounter through their Microsoft 365 agreements.

[Image: a map of Microsoft products and services named 'Copilot', showing the relationships and overlaps among various applications, categories, and tools as of April 2026. "How many products does Microsoft have named 'Copilot'? I mapped every one" by Tey Bannerman; shared on LinkedIn by Doug Belshaw.]

At each tier, features appear and disappear. Free users hit rate limits mid-conversation; image generation is available on some plans but not others; file analysis, extended thinking, deeper research, larger context windows, and voice mode are all gated behind paid subscriptions. The most powerful capabilities are of course reserved for the most expensive plans.

GenAI notoriously bleeds cash. These tiers and apps are a deliberate choice by technology companies to monetise capability, and they compound the discoverability problem in a specific way: you can’t discover what you can’t afford.

A teacher on the free tier of ChatGPT who hits a rate limit after three conversations doesn’t think, “I’ve been locked out of a feature.” They think, “AI isn’t very useful.” The ceiling drops even lower.

What This Looks Like in Schools

Recently, I ran a session with administrative and finance staff at Hills Grammar in Sydney. A member of the finance team was working with a large, complex Excel spreadsheet. She used the new “Edit with Copilot” feature in Excel to create a summary table of all accounts with a GL discrepancy of over twenty percent in a twelve-month period.

It worked flawlessly, in seconds.

The problem is, that feature is only available on a higher tier of Microsoft Copilot licensing than most schools have access to. The majority of educators and school staff in Australia – including most of the staff at Hills Grammar – are working with the “entry level” Copilot bundled with their existing enterprise license. The finance officer at Hills Grammar could do something remarkable because her specific account had the next level of enterprise licensing.
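To make the task concrete: stripped of the interface, the check Copilot performed is a simple threshold comparison. Here's a minimal Python sketch, using hypothetical account names and a budget-versus-actual definition of "discrepancy" (the real spreadsheet's columns and formula will differ):

```python
from dataclasses import dataclass

@dataclass
class Account:
    code: str
    budgeted: float  # budgeted total for the twelve-month period
    actual: float    # actual total for the same period

def flag_discrepancies(accounts, threshold=0.20):
    """Return (account code, discrepancy %) for accounts whose actual
    spend deviates from budget by more than `threshold`."""
    flagged = []
    for acc in accounts:
        if acc.budgeted == 0:
            continue  # avoid division by zero; zero-budget accounts need separate handling
        deviation = abs(acc.actual - acc.budgeted) / acc.budgeted
        if deviation > threshold:
            flagged.append((acc.code, round(deviation * 100, 1)))
    return flagged

# Hypothetical sample data for illustration
accounts = [
    Account("6100-Maintenance", budgeted=50_000, actual=63_500),  # +27%
    Account("6200-Utilities",   budgeted=80_000, actual=84_000),  # +5%
    Account("6300-Catering",    budgeted=30_000, actual=22_500),  # -25%
]
print(flag_discrepancies(accounts))
# → [('6100-Maintenance', 27.0), ('6300-Catering', 25.0)]
```

The point is not that finance staff should write Python; it's that the licensed Copilot feature did this in seconds against a large, messy real spreadsheet, without anyone needing to know the code above exists.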

The same is true of Google’s Gemini, which can be bundled with Workspace accounts on the lowest tier, or pumped up with more access to Gemini in Workspace apps, Google AI Studio, and so on. Some organisations – though not many in Australia – have ChatGPT Teams or Education accounts, or enterprise licenses to platforms like Claude. Most, however, stick with the “free” tier (in “bunny ears” because, of course, free either means you pay with your data, or they already increased the price).

This creates an invisible gap. Schools with bigger budgets and better licensing agreements – including select schools with early access to features – get access to more capable AI. Their staff discover more, build stronger mental models, and develop more sophisticated workflows. Schools without those resources hit walls earlier, discover less, and file AI under “not that useful.”

I’ve written before about how the digital divide plays out with students, and this is essentially the same dynamic playing out with educators.

What the Hell is a CLI?

Access problems aren't limited to cost: the technology environment of a school or university can make it very difficult to use more complex AI. The most capable AI available right now is not the chatbot interface at all, but command-line tools like Claude Code and OpenAI Codex, and development platforms like Cursor. These are agentic AI systems that can use tools, read and write files, execute code, browse the web, and complete complex multi-step tasks semi-autonomously; they make the browser-based chatbot look like a pocket calculator.

I first wrote about this shift back in 2024 when Anthropic released Claude Computer Use. At the time, it felt like a glimpse of the future, and potentially a repeat of the "shock and horror" reaction to the initial release of ChatGPT. Now it's the present, and the gap between what a command-line AI agent can do and what a free chatbot can do is enormous. My recent taxonomy of AI Agents lays out some of these capabilities.

But these tools are almost completely out of reach for schools, for two reasons:

First, schools typically don't have enterprise agreements with companies like Anthropic. While there is a "Claude for Education" licensing deal, in the same way there's a Microsoft 365 for Education agreement, in my experience it's geared towards US Higher Education institutions (though this might change as Anthropic makes inroads in Australian politics…). For now, if you want Claude Code, you're probably paying out of pocket or using a personal account.

Second, teachers typically don’t have command line access to school-managed devices, and that’s perfectly reasonable. IT teams have good reasons for restricting what can be installed and executed on networked machines; a terminal with elevated permissions on a device connected to school infrastructure is a serious security concern.

But I think there’s a better way to manage this than a blanket restriction that locks out the most capable tools entirely.

Sandboxing

Give a small group of enthusiastic educators a sandboxed device: a sufficiently powerful laptop or desktop that is not connected to the school network, loaded with the best available AI tools and a twelve-month licence to a platform like Claude Max or ChatGPT Pro. Let them experiment, break things, and report back to the rest of the staff on what they find.

Although Claude Code is a cloud-based service, the things people build with it often run on-device: that's why I recommend a decently powerful computer. I use a 2023 MacBook Pro M3 for most of my work, and it generally holds up. With 18GB of RAM I can't run any heavy-duty local language models, but I can install and run things like the Whisper transcription model. I currently use the Claude Max 5x plan, at US$100/month. The total cost for a setup like this would be a few thousand dollars: the price of the device plus the subscription.

To put that in perspective, sending three members of staff to a two-day AI in education conference costs the same or more once you factor in registration fees, flights, accommodation, and relief teaching. One option gives you three people who sat in an audience for two days; the other gives you a permanent capability within the school that can be used, shared, and built upon all year.

Shadow AI

When the official tools don’t give people what they need, they go elsewhere. Teachers sign up for personal accounts, use AI on their phones during lunch breaks, and find workarounds that sit outside institutional oversight, governance, and data policies.

This is shadow AI, and it is a governance problem that grows directly from the access problem. The more you restrict what’s officially available, the more people route around those restrictions; not because they’re being reckless, but because the sanctioned tools are inadequate for what they’re trying to do.

Soon, we'll be releasing our updated version of the 2023 VINE Guidelines for GenAI in schools. Shadow AI was the problem area most often identified by ICT managers in our surveys and conversations. But it's an issue that sits downstream of other problems, including the cost and accessibility of higher-quality platforms and the institutional restrictions placed on them.

The discoverability problem is real, but so is the accessibility problem; and until we address both, the gap between those who know and those who don’t will keep widening.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
