“Personalised learning” is one of those educational terms used so often that it is hard to pin down exactly what it means. It has been applied to everything from one-to-one in-person tutoring to online project-based learning and, of course, AI platforms. Many schools use the term, and the intentions behind it are valid: students do deserve personal, individualised instruction rather than a bland, standardised education.
Most recently, though, the rise of large language model-based technologies like ChatGPT has lit a fire under both EdTech and Big Tech, causing them to spout unfounded claims such as the notion of AI “democratizing higher education and creating ‘lifelong learners’.” The term “personalised learning” has been well and truly co-opted.
Through the lens of artificial intelligence, personalised learning can be defined roughly as:
One-to-one tutoring and adaptive learning pathways developed for individual students based on their needs and derived from analytic data.
Personalised learning is framed in tech circles as the logical conclusion of many different types of technology, including everything from generative artificial intelligence such as ChatGPT to predictive algorithms, big data analytics, and even wearable technologies and biometrics. Personalised learning, however, is a front, a thin veneer over the true goal of most of these technologies: surveillance and data capture.
Your Personal Constellation
Picture the night sky seen from the paddock behind my house. I live in regional Victoria, Australia, and I’m lucky to look out on the sky free from light pollution. Here, I can see broad swathes of the Milky Way, millions of stars, some arranged in familiar constellations, others scattered with apparent randomness across the blackness.

It’s a fitting analogy for what these tech companies’ systems “see” when applied to our data or our students. The night sky of Big Tech data capture is nowhere near as clear as my rural Australian view. Data captured by platforms like Google Workspace, Microsoft 365, or even ubiquitous social media like Facebook and TikTok could only fill a small portion of that sky.
The familiar constellations are shapes marked by proprietary data ownership: in this corner of the sky, the data owned by Meta; over here, everything that Google knows about you. And the sky is occluded in parts by looming, inconvenient clouds like the European Union’s GDPR and other pesky regulations that stand in between you, your data, and the companies that want to collect it.
Still, despite corporate competition and the varying efforts of international governments, the sky is bright with the stars of your data. These companies know more about you than you think, and almost certainly more about you than you’d like. The same already applies to our students, and the personalised learning narrative is set to hand over infinitely more data.

What It Takes to Personalise Learning
In every technological advance, there’s a trade-off—to move forward, we give something up. It seems that to move forward with the rhetoric of personalised learning, all we need to give up is everything we are. Sound dramatic? Here are a few ways technology companies are already collecting data under the guise of personalised or adaptive learning.
Let’s start with the tamest and perhaps most obvious, and work our way up to the extremes:
- Grades and feedback collected through Learning Management Systems
- Report comments, self-assessments, and reflections passed along to language models for semantic understanding
- Completion times, examination attempts and repeats, and other numerical data indicating how long a student has worked on a task
- Artificial intelligence-powered activity trackers
- Screen monitoring
- Keyboard capture
- Mouse movements and clicks
- Wait times (how long a student hovers over a button before clicking, or how long they hover over the submit button before committing an answer)
- Chat threads and entire histories of conversations with artificial intelligence tutors
- Emails and other communications, both personal and professional
- Eye tracking
- Voice pitch, tenor, tone
- Body language
- Facial recognition
- Breathing patterns
- Heart rate
- Pulse
- Inferred emotions
If you could gather all of this data on a student, you could attempt to personalise their learning pathway. You could use their biometric information, such as pulse, combined with wait time and eye tracking, to indicate where a student falters and where they might need some artificial intelligence-assisted support.
Imagine this: you, a student tense under the pressure of assessment, grinding your teeth and hovering for a moment too long over the text input box. The AI-powered ghost of Clippy pops up and asks if you need any assistance. Perhaps it reminds you of a solution to a similar problem you worked on last week. Perhaps it comments on what you had for breakfast.
Or how about the combination of big data, including personal browsing habits, internet history, and social media, alongside a standard curriculum monitored through an AI-powered Learning Management System—course content created on demand based on personal preferences and interests, perhaps shaped by gender, inferred sexuality, or emotions. A personalised learning system that capable could provide educators (overseers, managers, authorities) with real-time updates on your inferred mental state.
If this sounds like the stuff of science fiction or dystopia, you’re not wrong. It often feels as though tech company CEOs read a little too much Philip K. Dick in their youth and skipped over the irony and social commentary, reading it instead as an instruction manual for the future.

Whose Learning Is It Anyway?
Influential technology figures such as Bill Gates, Sam Altman, and Andrej Karpathy all seem to share a vision of learning and education, and it is one in which a student sits in front of a device and receives on-demand education wherever and whenever they desire it, in a format most suited to them as decided by the data and the algorithms.
Personally, I struggled through school. I was bored, restless, antagonistic, frequently kicked out of classes, and completed most of my work either in a hurry or not at all. I fell into a university course in English and American Literature, slept through the majority of it, and woke up at the end to write a dissertation on a subject that I was personally interested in. Luckily, one other lecturer in the university agreed. From there, with a degree in English and American Literature in hand, I did what any sensible English student does and got a teaching degree (because what the hell else do you do with an English qualification?).
During my teaching career, I completed a Master’s in education. As someone who didn’t really like school, I’ve certainly spent a lot of my life there, and I’m now back at a university studying for my PhD. But at every step along the learning journey, I’ve gotten closer and closer to my own preferred way of learning. As a PhD student, I’m pretty much free to learn however I like:
- Hyper-focusing on journal articles for eight hours at a time
- Manically scrubbing through audiobooks and podcasts at 1.7× speed
- Putting off writing and then smashing through frantic 12-hour-long shifts, hammering out words on a computer or dictating them for later transcription
- Supplementing gaps in my knowledge, such as the intricacies of machine learning, with online courses.
This is actually quite close to the vision espoused by people like OpenAI co-founder Andrej Karpathy, who envisions a future where education is “supported, leveraged and scaled with an AI Teaching Assistant who is optimized to help guide the students”.
I want to make it clear that although I’m critical of Sam Altman, Bill Gates, and Andrej Karpathy’s vision of personalised learning, I personally learn best in exactly the ways these people describe. I’m not social. I like to home in on a topic and focus intently, pulling information from many different sources at once, self-directed. And frankly, for most of my life I have been antagonistic towards teachers and anyone else trying to tell me how to do things, which is ironic, considering my career.
But because I spent my career at the front of a classroom, I can imagine what learning is like for people who aren’t me, and this seems to be something that escapes these CEOs.
Unlike Altman, Karpathy, and the others, I spent half my life working with diverse young people; the industry in which they spent most of their careers is notorious for its race and gender homogeneity. That said, while the technology industry lacks diversity in gender and race, recent reports suggest an abundance of neurodivergent workers in the field. Like me, these tech CEOs and their colleagues do not represent the majority of learners, let alone the global population of students.
Another difference between me and them is that whilst this blog is popular enough (thanks!), I don’t hold the keys to global education movements. I don’t run a company which is funded in large part by billions of dollars of school and university technology licenses. I don’t have a direct pipeline to billions of learners across the world. I’m not shoving my personal form of “stare-at-a-computer-for-hours” personalised learning down anyone’s throat.

How Do You Solve a Problem Like Personalised Learning?
We need to recognise that data, however vast, present an imperfect picture of an individual’s world. We need to recognise that what scales isn’t always what works, and what works for one doesn’t work for eight billion. We need to separate “end users” from learners and to disregard the positivist idea that if we can just get our hands on enough data, we can understand everything we need about how a person works.
We also need to recognise that education is messy, and that teachers, who are far from perfect, are at least human: capable of connecting with students who don’t learn best glued to a computer screen in isolation, madly fast-forwarding through digital content as they try to absorb the entire internet. For the most part, people do not learn like machines. Providing them with more data, or extracting as much data from them as possible, will not yield optimal results. You can’t optimise humans for efficiency.
Personalised learning, as the grand technologists describe it, is a myth, and although these technologies will no doubt be useful for many learners across the world in all kinds of contexts, it is not the panacea we have been promised.
We need new language to call out this myth of personalised learning: our words have been co-opted into glossy marketing terms. The “personal” has been stripped from personalised learning, so what should we call it instead? Algorithmic instruction? Not another AI… How about Data Extracted Virtual Instruction and Learning (DEVIL… as in, “deal with the…”). Or maybe we just call it what it is: impersonal, automated instruction driven by data mining and algorithmic decision-making. Sorry, I don’t have a snappy catchphrase for that.