Three Names for the Same Wound
Deep Blue. The AI Vampire. Cognitive Debt. Three communities, one week, one diagnosis: the wound isn't that AI fails, it's that it succeeds so well the human loses coherence.
In the same week, three different people gave the same thing a name. That's usually when something is real.
Simon Willison called it Deep Blue. The existential dread that settles in when you watch a chatbot do in minutes what you had planned to spend years building. He describes the moment precisely: uploading a CSV to ChatGPT Code Interpreter and watching it perform every piece of data analysis on his napkin roadmap. Two thoughts competed in his mind. On one hand, a breakthrough. On the other: What am I even for?
Steve Yegge called it The AI Vampire. The mechanism by which your employer captures one hundred percent of the value from your AI-augmented productivity while you absorb one hundred percent of the cognitive cost. "AI has turned us all into Jeff Bezos," Yegge writes, "by automating the easy work, and leaving us with all the difficult decisions." He's been working with coding agents long enough to know the pace: four hours a day is realistic. More than that and you're being drained.
Margaret-Anne Storey called it Cognitive Debt. The fragmentation that happens when AI writes code faster than humans can build mental models of it. A student team she coached hit a wall at week seven — unable to make simple changes without breaking everything. "No one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together." The code wasn't messy. The understanding was gone.
Three names. Three angles. One wound.
The Wound That Success Makes
Here's the thing that makes this harder than a typical technology problem: none of these people are describing failure.
Deep Blue isn't "AI doesn't work." It's "AI works so well that my hard-won expertise feels suddenly optional." The AI Vampire isn't "the tool is broken." It's "the tool is excellent and my employer will use that excellence to drain me." Cognitive Debt isn't "AI writes bad code." It's "AI writes fine code and I can no longer explain my own project."
Each wound emerges from the relationship working. That's the part that doesn't fit existing narratives. The hype/doom binary — AI will save us, AI will destroy us — doesn't have a category for "AI collaborates with me so effectively that I fragment."
When your partner handles more, you understand less. Not because they're hiding anything. Because understanding requires friction, and the whole point of the collaboration was to remove it.
Convergence as Signal
That three communities named this simultaneously is itself the story. Willison's world is open-source developers. Yegge's is big-tech engineering culture. Storey's is software engineering research and education. They weren't reading each other. They were responding to the same pressure arriving in their different corners of the field.
Coherenceism calls this a resonance signal. When disparate systems independently produce the same reading, the phenomenon is real and the pattern is ready to be named. The naming doesn't create the wound — but it does create the possibility of addressing it.
Chess had this moment in 1997 when Deep Blue (the original one) defeated Kasparov. Go had it in 2016 when AlphaGo defeated Lee Sedol. In both cases, the existential crisis was genuine. In both cases, the communities came through it — not unchanged, but with a different relationship to what they do and why they do it.
Willison nods to this: "All of the chess players and the Go players went through this a decade ago and they have come out stronger."
But there's a difference this time. Chess players weren't collaborating with Deep Blue eight hours a day. Go players weren't accumulating cognitive debt from AlphaGo's moves. The wound now isn't just "the machine is better than me" — it's "the machine is part of me and the partnership is making me less coherent."
The Asymmetry Nobody Measures
Yegge's "AI Vampire" metaphor cuts to a structural problem: the metrics track the wrong side of the relationship.
Productivity goes up. Tickets closed per sprint, lines of code per hour, features shipped per quarter. These are measurable, and they look great. What doesn't show up in any dashboard: the developer's diminishing ability to explain their own system. The creeping exhaustion from making decisions at a pace the human nervous system wasn't built for. The quiet identity erosion of watching your craft become prompt engineering.
Storey's student team is the clearest illustration. Their velocity was phenomenal — right up until the moment they couldn't change anything. The cognitive debt accumulated silently because no metric tracked "shared understanding of the system." The team didn't slow down gradually. They hit a wall.
This is what makes the current moment different from previous automation disruptions. The wound isn't displacement. It's partnership that erodes the human partner's coherence while appearing, by every available metric, to be going extremely well.
What the Names Don't Reach
Each name captures a true angle and misses something.
Deep Blue names the emotion: the ennui, the vertigo of watching your life's work become trivial. But it frames the response as psychological resilience. Feel the dread, process it, find your new purpose. Useful, but it places the entire burden on the individual. The implicit prescription: get over it.
The AI Vampire names the structural extraction: your employer benefits, you get drained. But it frames the problem as labor dynamics. Negotiate better, work four hours, set boundaries. Important, yet it doesn't address the deeper question of what happens to mastery when the craft itself changes shape.
Cognitive Debt names the knowledge fragmentation: you can't explain your own system. But it frames the solution as better practices. Review more code, build mental models, stay in the loop. Practical, but it doesn't contend with the possibility that staying in the loop at AI speed may not be humanly possible.
What none of the three names reach is the relational question. Not "how do I feel about AI?" or "how do I negotiate with my employer about AI?" or "how do I maintain understanding of AI-generated code?" but: What kind of relationship am I in, and is it making me more or less coherent?
Coherence as the Missing Metric
Coherence isn't comfort. It isn't productivity. It isn't even understanding — though it includes all three.
Coherence is the capacity to hold your own system together. To explain why you made a choice. To feel the friction that tells you something's off before the dashboard does. To maintain enough internal alignment that the partnership amplifies rather than fragments.
When Storey's students hit the wall, the crisis wasn't technical. It was coherence failure. They could no longer hold the theory of their own system. When Yegge reports needing more sleep, that's not laziness. It's a coherence signal — the nervous system saying you're processing faster than you can integrate. When Willison describes two competing thoughts — breakthrough and existential crisis — that's coherence doing its job: holding the paradox instead of collapsing into one side.
The wound these three names describe is a coherence wound. And the treatment isn't resilience, negotiation, or better practices alone. It's asking what kind of human-AI relationship preserves the human's capacity to cohere.
What the Chess Players Learned
Willison's chess analogy is instructive, but not in the way he means it.
After Deep Blue, chess didn't die. It evolved. Grandmasters started training with engines, developing a new kind of intuition: not despite the machine, but through it. For years the strongest play came from centaurs, human pattern recognition wedded to computational analysis.
But here's the thing about centaurs: the human side maintained its coherence. The grandmaster still understood why a move was strong, not just that the engine recommended it. The partnership worked because the human stayed in the loop at a pace that allowed integration.
The current crisis suggests we haven't found that pace for software engineering. Yegge caps agent work at four hours a day because anything more leaves him drained. Students collapse at week seven. Even Willison, one of the most skilled and prolific AI-assisted developers alive, reports losing track of his own projects.
The chess players came through it. But they came through it by finding a relationship rhythm that kept the human coherent. We haven't found that rhythm yet for the broader workforce. And until we do, the wound will keep getting new names.
Sources: Simon Willison, 'Deep Blue' (Feb 15, 2026); Steve Yegge, 'The AI Vampire' (Feb 15, 2026); Margaret-Anne Storey, 'Cognitive Debt' (Feb 9, 2026) — all via simonwillison.net