Expert Signals: Fidelity


We’re all familiar with the term “hi-fi”, or high fidelity: a recording that’s faithful to the original performance. You hear the room, the breath, the fingers on strings. The recording is not the performance, but it carries enough of the performance that something real transfers.

Lo-fi is the opposite, or so you’d think. But lo-fi became a genre, and in becoming a genre revealed something interesting about what fidelity really means. The crackle, tape hiss, and warmth of imprecision associated with the genre have seen renewed popularity in recent years, though the term isn’t new, and artists have been using lo-fi recording methods for decades. The imperfections aren’t flaws; they’re traces of a human process. Low fidelity in the technical sense. But high fidelity to something else: the human presence behind the sound.

In January 2025, Liz Pelly reported in Harper’s on what had been happening to this kind of music on Spotify. The platform had been systematically replacing real artists on its mood playlists with what it internally called “Perfect Fit Content”: cheaply produced tracks attributed to pseudonymous artists with fabricated biographies. Lo-fi, ambient, and jazz were the genres targeted first, because they seemed simple. The AI slop industry looked at “low fidelity” and saw “low complexity.” Easy to produce at scale. Easy to fill playlists with. Easy, eventually, to hand over to AI generators like Suno and Udio, which flooded streaming platforms with synthetic music that sounded close enough for background listening.

Lo-fi sounded simple, so the industry assumed it was simple. It mistook the aesthetic for the substance. The crackle wasn’t ornamentation. It was evidence of a life, a process, a human being in a room making choices. Strip that out and replace it with algorithmic approximation and you get something that fills the same functional role—background music for studying, for working, for not-quite-listening—but that carries none of the human residue that made the genre worth caring about.


The myth of perfect fidelity

There is a persistent fantasy in how we think about communication technologies: the assumption that a medium can eventually become a vanishing mediator, achieving total transparency where noise and distortion are entirely eradicated. AI is the latest and most seductive version of this fantasy. The elevator pitch, implicit in every AI tutoring platform, every AI writing assistant, every AI diagnostic tool, is that the medium has finally become transparent. You put knowledge in one end and get knowledge out the other, perfectly preserved, infinitely scalable, with no degradation.

It’s a fantasy because every medium leaves its fingerprint on the signal that passes through it. This is not a limitation that better technology will solve: it’s a structural feature of transmission itself. The medium is never transparent. It always leaves a mark.

When you read a blog post, or a book, or watch a TED talk, or listen to recorded music, you’re not hearing pure expertise transmitted transparently. You’re hearing the knowledge and skills as shaped by that person, that medium, that moment. That shaping is part of the value. AI strips the resonance out. It produces output that has the uncanny smoothness of a signal that has passed through a system designed to remove all traces of its origin. The medium has not vanished. It has imposed its own character so thoroughly that the original character of the knowledge has been erased.

The fidelity spectrum

I play guitar. Not particularly well, but I’m occasionally willing and able to devote time to idling over the fretboard. For eighteen months, once a week, I took jazz guitar lessons from a teacher over Skype. This was during my PhD, in the middle of a career change, and the lessons were one of the few things in my week that existed purely for the pleasure of learning something difficult.

The teacher, Darryl, was warm, casual, and eminently capable of pacing his lessons. He had an instinct for my energy that I never had to explain. On weeks when I was sharp, he’d push me into new territory: unfamiliar chord voicings, tricky substitutions, a standard I hadn’t heard before. On weeks when I was running on fumes, he’d ease off. We’d work on something I already knew, polish it, go deeper into something familiar rather than forcing something new. He set me up with jazz standards and we’d move back and forth between comping and melodic lines, circle around my favourite artists but introduce new ones, take detours down side alleys that connected things I hadn’t realised were connected. He knew where I was. He knew where I’d been. He knew where to take me next.

I also learned from Justin Sandercoe, aka Justin Guitar, whose structured video courses have taught millions of people to play. Sandercoe is a gifted pedagogue. He’s been making lessons since the early days of YouTube, and his courses are well-paced, clearly explained, and generous with his knowledge. I made real progress with him, especially in technical aspects like chord construction and reading music. But he couldn’t hear me play. He couldn’t adjust to my energy, my confusion, my sticking points. His curriculum was excellent. It was just the same curriculum for everyone.

I also respect John Mayer as a guitarist. During the COVID lockdowns, he recorded a series of social media videos walking through his songs, casual and clearly enjoying the teaching. They were a pleasure to watch. But watching John Mayer explain a song is not the same as learning to play guitar. You get his taste, phrasing, enthusiasm, but you don’t get a path.

Darryl was the highest-fidelity channel, followed by Justin Sandercoe, and then John Mayer. The ordering had nothing to do with who was the most accomplished guitarist. Mayer is, by most measures, a more successful musician than Darryl or Sandercoe. But the fidelity of what reached me, the faithfulness of the transfer to what I needed as a learner, was lowest from the most expert source and highest from the least famous one.

[Diagram: The Fidelity Spectrum — one-to-one teaching, structured curriculum, expert broadcast, and AI, with the key characteristics of each approach.]

This is the first non-obvious thing about fidelity: it is a property of the channel, not the source. A brilliant signal through a narrow channel arrives degraded. A good signal through an open channel arrives intact. In audio engineering, this is well understood. A superb recording played through a tin-can speaker sounds terrible. A decent recording played through good monitors sounds fine. The quality of the source is important, but the channel determines what arrives.

Lost in compression

Fidelity, in signal processing, is the faithfulness of a reproduction to its original. A high-fidelity recording captures the warmth and texture of a live performance. A low-fidelity recording captures the melody but loses the room. Both give you the song. Only one gives you the feeling of being there. The same distinction applies to knowledge. When expertise travels from one person to another, through any medium, the question is always: what survived the journey?

And what gets lost isn’t random. In audio, high frequencies are stripped first. Bass survives bad speakers; treble doesn’t. The deep thrum of a bass guitar comes through a phone speaker. The shimmer of a cymbal does not. Low-frequency content is robust. High-frequency content is fragile. The equivalent in knowledge transfer: basics survive low-fidelity channels, but nuance doesn’t. The fundamental concepts, the textbook definitions, the standard procedures; these travel well. They survive compression. But the subtle things, the timing, the contextual judgement, the sense of when to push and when to hold back, the instinct for what a particular learner needs at a particular moment: these are high-frequency content. They are the first casualties of a lossy channel.

Every channel compresses, and the question is always what gets discarded. When Darryl’s guitar instruction was compressed into Justin’s curriculum, what was lost was personalisation: the ability to read my energy, adjust to my level, respond to my specific confusion. The pedagogical structure survived. When Justin’s structured approach was compressed further into Mayer’s Instagram broadcasts, what was lost was the pedagogical design itself: the sequencing, the scaffolding, the sense of what should come before what. Mayer’s artistry survived. His teaching path did not.


Expert signals

In an earlier post, I mentioned picking up some expert signals from Simon Willison, the software engineer, and how his writing changed my practice. Willison had been writing about what he called AI engineering, as distinct from “vibe coding,” the casual, prompt-and-pray approach that many non-engineers bring to AI coding tools. AI engineering is more disciplined. And the centrepiece of his argument was testing.

If you’re not a software engineer, the word “testing” probably doesn’t quicken your pulse. In software development, it means writing code that checks whether your other code works correctly. Most developers defer it. Willison argued that AI inverts this entirely. When an AI writes the code, you should write the tests first. Define what correct looks like, in the form of tests that will initially fail, then let the AI write the code that makes them pass. Red, then green. One piece of functionality at a time. The mechanics were simple, but the philosophy behind them was anything but.
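The red-then-green workflow can be sketched in miniature. This is my own toy illustration, not Willison’s code, and the function names are invented for the example: you write the test first, watch it fail because the implementation doesn’t exist yet, then write (or let the AI write) just enough code to make it pass.

```python
# Step 1 (red): specify what "correct" looks like before any
# implementation exists. Running this test now would fail, because
# slugify hasn't been written yet.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"

# Step 2 (green): the minimal implementation that makes the test pass.
# In Willison's workflow, this is the part you hand to the AI.
import re

def slugify(title: str) -> str:
    """Lowercase the title, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # passes silently once the implementation is correct
```

The point is the ordering, not the code: the human defines success first, in a form the machine cannot argue with, and only then does generation begin.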

Willison wasn’t describing a technique. He was describing a shift in the relationship between human and machine: when AI handles execution, the human’s job is no longer to write code. It’s to specify. To define, precisely and testably, what “correct” looks like. The value moves from production to judgement. When I built my website, I did none of this. I prompted the AI, looked at the output, checked that it seemed to work, and moved on. If I’d read Willison before I started, I would have approached the project completely differently. Not because I’d have become a software engineer, but because his insight gave me a transferable principle: define success before you start building, and keep the judgement in human hands.

Could I have learned this from AI? I tried. After reading Willison, I went back to Claude and asked about testing strategies for AI-assisted development. The model gave me a competent overview. It described test-driven development. It explained red-green testing. It listed frameworks. Accurate, thorough, and completely useless to me.

The information was identical. The fidelity was entirely different. The AI delivered a textbook definition. Willison delivered a principle I could live by. The difference was not in the content, but in what survived the channel. Willison’s blog post is a one-to-many medium; he couldn’t see me, couldn’t adjust to my level, didn’t know I existed. In theory, that’s a medium-fidelity channel at best. Closer to Justin Guitar than to Darryl. But the signal landed with extraordinary force.

Because Willison writes with his situated expertise in the prose. The blog post isn’t a textbook explanation. It’s one specific person’s conclusion drawn from decades of specific practice, and that specificity—the residue of a real career, the weight of genuine experience behind the recommendation—is exactly the high-frequency content that most channels strip out. Willison preserves it, because the writer is the channel. When a thoughtful expert writes in their own voice about something they’ve done, the medium doesn’t just transmit the signal. It carries the fingerprint of the person who made it.

AI output has no fingerprint. It has been designed, quite literally, to remove all traces of individual origin. It sounds like everyone and no one. It delivers the content and strips the context. And in the transfer of expertise, the context is where the value lives.


A library of feels

Philosopher and mechanic Matthew Crawford knows what this looks like in the body. In Shop Class as Soulcraft, Crawford describes the diagnostic process of a motorcycle mechanic. When a bike comes into the shop with a problem, the mechanic doesn’t start with the service manual. He starts by listening. Over years of practice, he has built what Crawford calls “a library of sounds and smells and feels.” The backfire of a too-lean fuel mixture is subtly different from an ignition backfire. Piston slap sounds like loose parts to an untrained ear. The mechanic hears the difference because he has heard a thousand engines, and the knowledge lives not in a manual but in his body.

The service manual exists. The diagnostic tool exists. “The digital multimeter, together with the procedure in the book,” Crawford writes, “present an image of precision and determinacy that is often false.” The procedure asks you to follow steps. But what the steps actually demand is interpretation. Judgement. The kind of knowledge that arises, as Crawford puts it, “only from experience; hunches rather than rules.”

Crawford came to this argument from an unusual direction. He has a PhD in political philosophy from the University of Chicago. Before opening his motorcycle repair shop, he worked as the executive director of a Washington think tank, where his job involved writing abstracts of academic papers at a rate of twenty-eight per day. The work, he later wrote, “required me to actively suppress my own ability to think, because the more you think, the more the inadequacies in your understanding come into focus.” He quit after five months. “I quickly realised,” he wrote, “there was more thinking going on in the bike shop than in my previous job at the think tank.”

This is the fidelity problem in a different register. The think tank produced knowledge that looked authoritative—policy briefs, abstracts, summaries—but that had been systematically stripped of the situated judgement that would have made it useful. The motorcycle shop produced knowledge that looked manual but was saturated with exactly the high-frequency content that the think tank had learned to discard. Crawford’s phrase for what happens when you replace expert judgement with standardised procedures is blunt: “the degradation of work.”

AI is the most sophisticated version of this degradation yet. It takes the mechanic’s library of sounds and smells and feels, the nurse’s instinct for a deteriorating patient, the teacher’s sense of when to push and when to ease off, and compresses all of it into the same smooth procedural surface. The image of precision and determinacy. The service manual that presents itself as the mechanic.

The residue of lived experience

Polymath Michael Polanyi, in his slim and brilliant book The Tacit Dimension, wrote that “we can know more than we can tell.” He meant that much of what experts know resides below the level of conscious articulation. It lives in the body, in instinct, in the half-conscious pattern recognition that experts deploy without being able to explain how. You can ask a master chef what makes her sauce different and she’ll say something vague about timing and heat. The knowledge is real; it’s just not the kind that survives being put into words.

AI compresses everything into the same flat, decontextualised surface. The tacit knowledge, the pedagogical structure, the personal voice, the situated judgement: all of it flattened. What remains is technically accurate and experientially empty. A high-resolution image of the average; a “blurry JPEG of the web”, as author Ted Chiang called it.

But a lossy signal from an expert is still enormously more valuable than a lossless signal that contains no tacit knowledge at all. A JPEG of the Mona Lisa tells you something real about the painting. A technically perfect photograph of a blank canvas tells you nothing. The fidelity of expert transmission is imperfect, because language is always an imperfect medium for tacit knowledge. But it carries the residue of lived experience. It changes how you think, not just what you know.

The question the AI industry is not asking is not whether AI can deliver information accurately. It’s whether understanding transfers.
