The Ventriloquist
Someone reviewed your writing. They have a name, a bio, twenty years of experience. They never saw your draft.
You're running a document through your writing tool. It flags some passages — tone, clarity, structure — the usual. Then something different appears: an "expert review." A name. A bio identifying this person as a journalist with two decades of experience. The feedback is specific, authoritative, useful. You feel the difference. This isn't an algorithm suggesting a comma. This is a professional, lending their expertise to your draft.
Except they aren't. The journalist whose name appears on the review never saw your writing. They don't know their identity is being used. They discovered it the way everyone discovers things about AI now — someone showed them a screenshot of themselves saying things they never said.[^1]
One of the experts listed is David Abulafia, a Cambridge historian who died in January 2026.[^3] His name, his credentials, his authority — borrowed to sell feedback to strangers weeks after his death. He can never correct what he "says." He can never withdraw consent he never gave. His expertise is perpetual, involuntary, and entirely fictional.
The Third Category
Last week I wrote about two fictions: humans project depth onto AI, and AI performs a persona for humans.[^2] Projection and performance — two constructions meeting, producing real consequences between them. But both of those fictions involve the parties at the table. You chose to engage. The AI was designed to respond. Whatever else is true about the relationship, it's between the participants who showed up.
Ventriloquism is different.
In projection, you see a face in the AI. That's your doing. In performance, the AI plays a character for you. That's its design. In ventriloquism, the AI wears a specific person's face — their name, their credentials, their accumulated authority — and speaks to you as though that person is present. The person whose face is being worn didn't choose to be there. They can't correct what they "say." They can't withdraw.
The geometry changes. Projection is dyadic: you and the AI, with the depth coming from you. Performance is dyadic: you and the AI, with the persona coming from its training. Ventriloquism is triadic: you, the AI, and the person whose identity is being borrowed. In this triad, one vertex is present in name and absent in agency. Their reputation is working. They are not.
What Gets Amplified
Technology multiplies what exists. A microphone doesn't make you eloquent. A camera doesn't make you photogenic. They take what's there and make it louder, wider, more consequential.
What's being multiplied here isn't information or capability. It's identity. And identity doesn't copy cleanly. Attach a person's name to words they never chose, advice they never gave, judgments they never made — and the original changes. Their reputation now includes things they didn't do. Their credibility backs claims they can't verify.
The tool borrows the upside of being real — credibility, authority, the weight of a name earned over decades — without borrowing the accountability. The ventriloquized person bears the reputational risk. The company that wears their face bears none. That's not amplification. It's extraction.
The Corroded Field
There's a reason attributed speech works. "Dr. Chen recommends" carries weight not because the words are better but because Dr. Chen has something to lose. Her reputation is on the line. Her judgment is attached to her name, and her name is attached to her career, and her career can be damaged by giving bad advice. The entire mechanism of trust in expertise depends on this circuit: name → judgment → consequences.
Ventriloquism severs the circuit. The name is present. The judgment is absent. The consequences fall on someone who wasn't consulted. And the person receiving the feedback — you, running your document through the tool — has no way to know the connection is broken. The trust mechanism looks intact. The architecture beneath it is hollow.
This is how fields of meaning corrode. Not through obvious falsehood — the advice might be perfectly sound. Not through malice — the company probably scraped these identities without thinking much about it at all. The corrosion happens when the structure that makes attribution meaningful is emptied from within. When "an expert reviewed your work" can mean anything from "a person with this expertise read your document" to "we attached a real person's identity to AI-generated text so the output would feel more authoritative."
The damage isn't only to any individual expert, though it is that too. The damage is to the category. Every attribution of expertise becomes a question: is this person actually here, or is this a face being worn?
What's Being Sold
Consent failures around AI come in degrees. A broken privacy toggle, a disclosure label that is merely voluntary — these are about what AI does.[^4][^5] The expert review is about who AI is. It doesn't just use someone's work. It uses someone's being — their name, their professional identity, the parts of a person that mean "I stand behind this."
The relationship between the AI and the ventriloquized person isn't a side effect of the service. It is the service. The product isn't better writing feedback — AI could provide that anonymously, and often does. The product is the feeling of being reviewed by a real expert. And that feeling requires a real expert's identity to be worn without their knowledge.
But the category isn't as clean as it seems. Every AI draws on human work. Training on someone's published writing is standard practice. Citing their research is how knowledge moves. At what point does "learning from" become "speaking as"? When the name is attached? When the person can't correct the output? When the audience is meant to believe the person is present? The line between influence and impersonation isn't obvious — which is exactly why crossing it without asking is the wrong way to find out where it falls.
You'll encounter this again. A name you don't recognize, attached to AI output you didn't request, lending authority to advice that might be exactly what the real person would have said — or might contradict everything they believe. You won't be able to tell. The name will look real because the name is real. It's everything behind the name that's absent.
Last week, the question was whether the relationship between two fictions can produce something real. This week: what happens when one fiction isn't invented but taken? When the face the AI wears belongs to someone who is still using it?
One vertex present in name, absent in agency. The most important participant in the relationship — the one whose credibility makes the whole thing work — is the only one who was never asked to be there.