OpenAI Has Come for Education

The other day, I started writing an article about OpenAI’s new partnership with Instructure, the company that owns the Canvas Learning Management System. As usual, I began drafting the article verbally, but unusually, when I got to the end of the draft, I realised it hadn’t actually been recording, and I lost the whole thing.

A couple of days later, OpenAI released its “study mode”, and it occurred to me that the problem was never really the partnership between OpenAI and the world’s most popular LMS. The real problem is the increasingly aggressive incursions of the technology company into the global education system.

Cover image: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Learning was never meant to be managed: Now, OpenAI is the boss

OpenAI’s recent partnership and integration with Canvas is entirely unsurprising. It makes perfect business sense for both Instructure and OpenAI. For the learning management system company, the partnership allows it to offer AI-on-tap as part of its products. Since educators are already using language-model-based technologies to produce lesson plans, assessments and curriculum materials, it’s a no-brainer that the company behind the world’s most widely used LMS would want to keep its users in-product for as long as possible.

For both Instructure and OpenAI, it also represents an acknowledgement that third-party players have been using the technology in exactly this way for a few years now. Smaller LMS providers like Toddle and so-called AI-powered education assistants like MagicSchool have been drawing on OpenAI’s GPT models via its API to build exactly the kind of features that Canvas is now baking into its platform.

But for OpenAI, there’s a further strategic advantage in partnering with the LMS. The company has long recognised that students are its biggest user base, a fact easily determined by the contents of people’s chat threads and the peaks and troughs of usage, which align with semesters in the United States. But students notoriously don’t have spare cash to pay for stuff, which presents an interesting problem for a company like OpenAI, which is reportedly haemorrhaging money.

Of course, the education sector isn’t only made up of students; it’s also made up of large, sometimes incredibly wealthy institutions and other technology companies. Strategically placing itself in partnership with these organisations on the proviso that “your staff and students are using ChatGPT anyway” seems like a great way to spin up some extra cash.

My primary concern in the first draft of this article wasn’t necessarily about OpenAI’s partnership, but more about the influence that technologies have as a whole on teaching and learning. Learning management systems are designed to do exactly that: manage and systematise learning.

There’s a niche but interesting (at least I think it’s interesting) field of research that explores the impact of technologies on the way we communicate, including in education. Think of the way that Microsoft PowerPoint has changed business presentations and both online and in-person teaching since the 1990s. Now, if you walk into a boardroom or a classroom without a PowerPoint presentation, people will sometimes treat you as if you’ve gone insane. Where are your teaching materials? How can you expect us to understand the WENUS without a pie chart and three dot points?

Microsoft PowerPoint has had such an impact on the way that we communicate that building presentations is its own cottage industry, and there are hundreds of thousands of organisations out there that can help you make the best presentation. But what if – shock, horror – PowerPoint presentations aren’t actually the best way to communicate? Well, you’ve got limited options. Some businesses and education providers require their staff to organise their thoughts slide by slide. It’s an accountability measure, and this logic can easily be extended to learning management systems.

Because the learning management system doesn’t just say you have to systematise your thoughts into a PowerPoint. It says you need to organise your thoughts in this number of PowerPoints across this many modules positioned in this order, on brand and with this many multiple choice questionnaires scattered throughout. Organisational use of learning management systems in education is often centred more on accountability than pedagogy.

OpenAI’s partnership with the world’s largest learning management system provider is a further step towards the discourses of inevitability that I critiqued in an earlier post, because unlike ChatGPT, which is still largely unregulated and untamed territory in universities, staff at many universities and schools are required to use Canvas. If Canvas structures learning design into tidy, manageable modules, and each of those modules has to organise teaching into a preordained number of slides, what happens when OpenAI’s chatbot suddenly takes the reins on how content is produced throughout the entire ecosystem?

OpenAI Comes for the Students

I don’t think it’s a coincidence that the partnership with Canvas was announced shortly before the release of OpenAI’s study mode. The morning of study mode’s release, I wrote a brief article summarising my thoughts. I wasn’t impressed by the step-by-step processes and so-called “Socratic questioning” on offer. Despite OpenAI’s claims to have developed the product with advice from teachers, researchers and scientists, I spotted half a dozen pedagogical issues in about 10 minutes of use.

When I posted my article, mathematicians like Shern Tee and Mike Abecina Cena jumped on board immediately to critique not just the pedagogy but the actual correctness of the responses. I also pointed out that students using the premium o3 model would have a qualitatively different experience to those using the freemium 4o version. ChatGPT study mode is rife with issues. Despite its claims to help students, it is basically a watered down version of ChatGPT-proper that I imagine most students would simply turn off, or not even bother to turn on.

But helping students was never the goal. Was it?

Study mode is another example of OpenAI’s incursions into education, acknowledging that large student user base and “empathising” with the concerns of teachers worldwide that students are basically using ChatGPT to do everything. OpenAI swings in to rescue the situation with a chatbot that won’t provide answers on the first try, except for when it does.

But in the background, this is just another piece of the puzzle, as OpenAI attempts to conquer the education sector. Microsoft and Google currently hold the lion’s share of global education access. Most schools in Australia are either a Microsoft school or a Google school. We don’t have many, if any, “ChatGPT schools”. Yet.

But study mode is designed as part of the solution for educators, an attractive proposition for teachers who are, by their own admission, exhausted by the volume of AI-generated content that they’re now having to wade through. Connect the dots between study mode and the Canvas integration, and it’s not hard to imagine a scenario where OpenAI and Instructure sell a neat little package to a university or school which allows the institution to limit student access within the platform to the locked-down study mode version of ChatGPT. This would probably come at a premium, with additional costs on top of Canvas’s existing subscription fees. After all, tokens cost money. But what school or university wouldn’t be willing to pay for the assurance that their students were no longer cheating using AI?

Having a study mode also gives OpenAI more leverage to join conversations that it has been wading into for about the past 18 months: conversations at the highest levels of education, globally.

OpenAI Comes for Policy

If nothing else, OpenAI has learned from the best when it comes to insinuating itself into the education sector. For many years, technology companies have involved themselves in education policy and government operations. Sometimes this is overt, such as Google rolling out Chromebooks across the US. At other times it’s more subtle. Technology-affiliated philanthropic organisations like the Bill and Melinda Gates Foundation donate huge amounts of money, sometimes through third parties, that ultimately influence the research carried out by supposedly independent educational think tanks.

In the UK, for example, millions of pounds have been funnelled through similar organisations to influence everything from curriculum design to teacher performance evaluations. Peel back a few layers, and you’ll find organisations like Microsoft closely connected to these venture philanthropists. And OpenAI is simply following suit as it ingratiates itself more and more with global leaders.

A few weeks ago, seemingly unprompted, but probably as a result of behind-closed-doors conversations, OpenAI published an economic blueprint for AI in Australia. The blueprint included a two-page spread on education, focusing on student deficits in standardised tests like NAPLAN and PISA, whose reliability as measures of student performance is widely contested by academics and educators. This deficit view is established as a rationale for using artificial intelligence platforms. Theirs, presumably, to help students grow, succeed, learn and so on, despite a complete lack of evidence that AI is remotely helpful to learners.

White papers like OpenAI’s economic blueprints are used to get a foot in the door in conversations in Canberra. Or just as believably, the foot was already in the door, the conversations have already been had, and the blueprint is a step along the path to conclusions that somebody has already come to.

And that conclusion might reflect something like the conclusion reached recently by the UK government, which has formed a “strategic partnership” with OpenAI to provide its services. You know what else is managed by the government in Australia and the UK? A decent portion of the education sector.

OpenAI is using education as a crowbar to leverage itself against its main competitors, including its former buddy Microsoft, into increasingly powerful positions inside and outside education, through a combination of tactics: partnering with global ed-tech companies like Instructure, releasing apparent solutions to AI problems like study mode, approaching educators through partnerships with education unions, coupling with nonprofit organisations like Common Sense Media and, almost inevitably, producing OpenAI-certified educator programs. All of these moves come from the tech company playbook that Microsoft and Google have followed for a long time.

The main difference is that, whether you like Microsoft and Google or not, many of their products actually work.

The Automated, Agentic, Awful Future of AI in Education

Say what you like about PowerPoint, but we all use it, don’t we? I know when I’m presenting, I like to have a fairly minimal setup. I embed a lot of videos because I demonstrate a lot of technology. I try to limit the number of dot points and follow best practice. But I still use PowerPoints because, by and large, they work.

When I’m writing my thesis, I use Microsoft Word, preferably the desktop version, because MS 365 online irritates me, but I rarely have issues producing a Word document. When I want to collaborate with someone, I’ll probably use a Google Doc instead. Google Docs works. Gmail works.

So many of the technologies we find ourselves lumbered with in education come from the land of business, and they’ve at least had some sort of trial by fire in an enterprise environment before making their way into the classroom. This approach can create problems – with Microsoft Teams probably being the biggest example of a business technology that never should have found its way into a classroom – but again, by and large, these applications do what you expect them to do.

Not so OpenAI’s products. Since the release of ChatGPT in November 2022, OpenAI’s approach has been the Silicon Valley cliché of “move fast and break things”, except by moving as fast as they can, most of the things that they release are already broken or never worked in the first place.

Take their most recent product, Agents. OpenAI’s Agents extends the company’s earlier experiments with Operator in producing a computer-using agent: a chatbot that can control a virtual environment and navigate a computer and a browser in a similar way to a human. Except it doesn’t work.

Ironically, given the earlier conversations about PowerPoints, one of the most touted abilities of Agents is to… create PowerPoints, and it fails even at that most basic of digital tasks. In my first experiment with OpenAI’s agent, it took over an hour to not produce a PowerPoint. In subsequent experiments, where I was getting pretty crafty with my prompts, I managed to get it to produce something only mildly awful.

Throughout all of these interactions, many of which were, of course, designed to replicate the kinds of tasks that educators would use Agents for, OpenAI’s products failed to demonstrate any understanding of education, pedagogy or good curriculum design. PowerPoints filled with size-eight text, atrocious layouts, reductive simplifications of content… Is this really what we have to look forward to if OpenAI continues its ingress into education?

OpenAI’s Agent couldn’t even make a PowerPoint. Can it really navigate an LMS or help a student?

When Agents are rolled out within Canvas, something which by this point is practically inevitable, do we expect these incompetent chat bots to make good use of our meticulously managed and systematised learning resources that some educators have been storing in their LMS for a decade or more? Or do we expect, like OpenAI’s study mode, that what we’ll end up with is a watered down, hopeless approximation of teaching and learning?

OpenAI Has Come for Education

OpenAI has come for education on all fronts: through the students, through the teachers, through the institutions, through the government. It’s doing so sometimes aligned with and sometimes in opposition to other technology companies. Having secured students as its largest user base, OpenAI is now presenting itself as the saviour from the very problems it caused in the first place.

I’d love to be able to write a more optimistic post saying that study mode represents an important step forward in helping students to avoid too much cognitive offloading, or that the partnership with Canvas is going to broaden access to schools and universities that otherwise couldn’t afford top tier AI. I’d love to say that Agents are going to save teachers countless hours of time, or even just that GenAI can help an educator to claw back six hours a week.

But I can’t say any of these things. All I can say in this post is that I am deeply concerned for the future of an education sector where OpenAI has more and more influence. At this stage, I’m actually backing Microsoft and Google and hoping that one of those big tech companies can thwart the ambitions of OpenAI. Better the devil you know, maybe.

Ultimately, like any technology, I suspect that there will be an evening out over time of the benefits and the harms presented by generative artificial intelligence. Over time, I suspect that some of the ethical concerns will be mitigated and I’m hopeful about things like open source technologies developed in the European Union and elsewhere. I’m hopeful about partnerships between technology companies and nonprofits and research that improves accessibility technologies. But I cannot find it in me to be optimistic about OpenAI coming for education.

Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
