Competent Is the Worst Thing a Manuscript Can Be
I had written 40% of a competent manuscript. It read like a book. It smelled like a book. If you squinted, it was book-shaped.
But competent is the worst thing a manuscript can be. Bad manuscripts have the decency to announce themselves. You read three pages and know something is broken. Competent manuscripts pass every surface-level check while quietly suffocating the reader with prose that could have been written by anyone about anything.
I tried the obvious AI writing approach. I fed Claude samples of my natural voice — blog posts where I sounded like myself. I wrote detailed prompts: “Match this tone. Be self-deprecating but precise. Use specific numbers. Reference gaming.” I got back text that was, technically, all of those things.
The output was aggressively median. It had the shape of my voice with none of the weight. Every sentence was grammatically defensible and emotionally dead.
The problem, I eventually realized, was role confusion. I was asking one AI to be my editor, my writer, my strategist, and my cheerleader simultaneously. This is the equivalent of hiring a contractor and telling them they’re also the architect, the building inspector, and the interior designer. You get a structure that technically has walls.
Separation of concerns — the first principle of systems architecture. You don’t put your database logic in your user interface. I’d spent 25 years applying this at work and then ignored it completely for my writing setup.
So I started over. I needed a system where each component did one thing well, where the components couldn’t see each other, and where I was the architect.
Build the Antagonist First
Most writing advice says start with the writing. I started with the editor.
This was counterintuitive. I wanted to produce pages. I wanted to see word counts go up. But the manuscript didn’t have a quality problem I could feel — it had a quality problem I couldn’t see. I needed something that could see it before I could fix it.
I built what is technically called a “developmental editor” — a system prompt with a specific editorial methodology. Phase 1: discover the theme. Phase 2: stress-test the argument. Phase 3: interrogate the structure. Phase 4: evaluate the prose. Each phase produces specific artifacts with specific criteria. The methodology matters more than the AI model running it.
Then I specialized it. I gave the editor a name (Zelda Felfenlagger — warm but exacting, zero tolerance for cheerleading), a personality, and 460 lines of book-specific context: the controlling idea, the chapter structure, the frameworks, the target reader, the competitive positioning, even the words I’d eliminated from the vocabulary. Every editorial session starts by loading that context. Every finding gets written back into it.
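To make that concrete, here is a rough sketch of what a session bootstrap could look like. The file names and the call_model stub are illustrative, not the actual implementation; the point is only that the context file gets read at the start of every session and appended to at the end.

```python
from datetime import date
from pathlib import Path

# Illustrative paths; the real project layout is not shown here.
EDITOR_PROMPT = Path("prompts/zelda_system_prompt.md")   # methodology, phases, personality
BOOK_CONTEXT = Path("prompts/book_context.md")            # the ~460 lines of book-specific context


def call_model(system: str, user: str) -> str:
    """Placeholder for whatever LLM call you actually make."""
    raise NotImplementedError


def editorial_session(chapter_path: Path) -> str:
    # Every editorial session starts by loading the full editorial context...
    system = EDITOR_PROMPT.read_text() + "\n\n" + BOOK_CONTEXT.read_text()
    evaluation = call_model(system=system, user=chapter_path.read_text())

    # ...and every finding gets written back into it, so nothing lives
    # only in a chat window that expires.
    with BOOK_CONTEXT.open("a") as ctx:
        ctx.write(f"\n## Findings: {chapter_path.name}, {date.today()}\n{evaluation}\n")
    return evaluation
```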
Zelda’s first pass caught things I’d missed across 3 months of revision.
She flagged my retreat into generic language. In section after section, I’d start with a specific confession and then drift into “most people” territory by paragraph three, as if my own experience wasn’t sufficient evidence. My draft said: “Most people drown in bronze. They spend their weeks in maintenance mode.” Zelda’s note: the author’s own audit is more honest and more persuasive than a generalization. Replace “most people” with “I.”
She also caught what she called “because clause” failures — places where I stated a claim without earning it. “The death spiral crosses the work-home boundary.” Why? How? The sentence reads as true, but a developmental editor asks: have you shown the mechanism, or are you hoping the reader will just nod? This exercise — adding “because” to every major claim and seeing which ones you can actually finish — is the single most useful editorial technique I’ve picked up.
The key architectural decision: Zelda cannot write prose. She can analyze, diagnose, score, and direct, but she cannot produce a single sentence intended for the final manuscript. Her output is directives: “Rewrite the opening through the promotion metaphor.” “Compress the neuroscience by 40%.” “This section argues against your own thesis — the framing privileges personal virtue over architectural change.”
An editor who also writes will unconsciously soften her criticism to match what she knows she can produce. Separation of concerns again. The editor’s only job is to be right about what’s wrong.
The Voice Is the Hard Part (And You Have More Than One)
Once I had an editor who could identify the problems, I needed a writer who could fix them. A different system with different context, and — critically — no knowledge of what Zelda had said. The writer receives directives, not diagnoses.
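A minimal sketch of that firewall, with field names I made up for illustration: the ghostwriter’s prompt is assembled from the approved directives and the existing prose, and nothing else crosses the boundary.

```python
from dataclasses import dataclass


@dataclass
class Directive:
    section: str       # e.g. "Chapter 4, opening"
    instruction: str   # e.g. "Rewrite the opening through the promotion metaphor"
    approved: bool = False  # only the human flips this


def build_ghostwriter_prompt(directives: list[Directive], existing_prose: str) -> str:
    """The writer sees approved directives and prose, never the diagnosis or scores."""
    accepted = [d for d in directives if d.approved]
    directive_lines = "\n".join(f"- [{d.section}] {d.instruction}" for d in accepted)
    return (
        "Revise the prose below according to these directives:\n"
        f"{directive_lines}\n\n"
        "--- EXISTING PROSE ---\n"
        f"{existing_prose}"
    )
```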
Building the ghostwriter forced me to confront something uncomfortable: I write in two voices, and only one of them is any good.
Voice one — the labnotes voice — produced this:
Process Infrastructure For My Favorite Client:
- System Level: Yearlong, multi-phase contracts with defined goals
- Project Level: Sprint kickoffs and retrospectives
- Task Level: Trello board for task tracking, daily standups
Process Infrastructure For Home Maintenance:
- My wife writes things on a whiteboard
- We ignore them for months
- Eventually they won’t erase, even with spray
Voice two — the manuscript voice — produced this:
“Most people drown in bronze. They spend their weeks in maintenance mode, telling themselves they’ll get to the important stuff once they catch up. They rarely catch up.”
The whiteboard joke works because the structure carries the humor. The format IS the punchline — a professional org chart dissolving into domestic entropy. The “most people” passage is perfectly grammatical and perfectly replaceable. Any competent writer could have produced it. That’s the problem.
The even bigger problem is that they’re both genuinely me. I’m 45. Not likely to be making any big changes as a writer at this point. But I also kind of need to.
So, I documented both voices with forensic specificity. Not a paragraph of guidance — a taxonomy. Six signature moves the writer must use. A separate list of anti-patterns the writer must avoid. The anti-patterns turned out to be more important than the patterns, because they defined the boundary between the voice that works and the voice that doesn’t.
The six moves: confess before teaching (first person before second person). Structural comedy (escalating lists, side-eye parentheticals, format-as-punchline). Specific numbers over generalizations — 19, not 20; 3-4, not “several.” Physical emotion without interpretation (“I sagged” rather than “that’s what burnout feels like”). Gaming and tech references deployed as native vocabulary, never explained. Executive function limitations treated as design constraints rather than character flaws.
The anti-patterns: third-person hypotheticals (“Imagine someone who…”). Sustained professorial register. Explaining the humor. “Most people” when “I” is more honest. Emotional interpretation from retrospective distance. Round numbers when real counts exist. The word “sovereignty.” Generic self-help language of any kind. Those are not garden-variety anti-patterns — that’s my bad writing, and since I’m the decision maker in this joint, the system needs to stop me when I insert it, which I do, almost every time I get involved!
Both lists went into the system prompt. When the ghostwriter produces a draft, I can evaluate it against specific, documented criteria instead of the vague sense that “something feels off.” The criteria are teachable, repeatable, and — because they’re written down — consistent across sessions and models.
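In the system prompt, that taxonomy looks roughly like the abridged sketch below. The real lists are longer and carry examples, but the shape is the same: two lists of plain sentences, rendered into the ghostwriter’s instructions.

```python
# Abridged; the real lists are longer and carry examples.
SIGNATURE_MOVES = [
    "Confess before teaching: first person before second person.",
    "Structural comedy: escalating lists, side-eye parentheticals, format-as-punchline.",
    "Specific numbers over generalizations: 19, not 20; 3-4, not 'several'.",
    "Physical emotion without interpretation: 'I sagged', not 'that's what burnout feels like'.",
    "Gaming and tech references as native vocabulary, never explained.",
    "Executive function limits treated as design constraints, not character flaws.",
]

ANTI_PATTERNS = [
    "Third-person hypotheticals ('Imagine someone who...').",
    "Sustained professorial register.",
    "Explaining the humor.",
    "'Most people' when 'I' is more honest.",
    "Emotional interpretation from retrospective distance.",
    "Round numbers when real counts exist.",
    "The word 'sovereignty'.",
    "Generic self-help language of any kind.",
]


def voice_block() -> str:
    """Render both lists into the ghostwriter's system prompt."""
    must = "\n".join(f"- {m}" for m in SIGNATURE_MOVES)
    must_not = "\n".join(f"- {a}" for a in ANTI_PATTERNS)
    return f"VOICE, required moves:\n{must}\n\nVOICE, hard anti-patterns:\n{must_not}"
```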
The Handoff Is the Product
The revision system works like this:
1. Zelda analyzes a chapter and produces a scored evaluation with specific directives.
2. I read those directives and decide which ones to accept. (This is the most important step. The human decides what to fix.)
3. The ghostwriter receives the approved directives plus the existing prose and produces a revised draft.
4. I refine the draft — sometimes a sentence, sometimes a section, sometimes I throw it out and write from scratch using the directives as a map.
5. If I want a quality check, Zelda scores the revision.
Zelda analyzes. I approve. The ghostwriter drafts. I refine. Zelda scores if needed.
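Wired together, the loop looks something like this. It is a sketch, not the production setup: it reuses the stubs from the earlier snippets (editorial_session, Directive, build_ghostwriter_prompt, call_model), parse_directives and the ghostwriter prompt path are placeholders, and the approval step is deliberately a manual prompt rather than anything automated.

```python
from pathlib import Path

GHOSTWRITER_PROMPT = Path("prompts/ghostwriter_system_prompt.md")  # illustrative path


def parse_directives(evaluation: str) -> list[Directive]:
    """Placeholder: pull directive lines out of Zelda's evaluation."""
    raise NotImplementedError


def revise_chapter(chapter_path: Path) -> str:
    evaluation = editorial_session(chapter_path)       # Zelda analyzes
    directives = parse_directives(evaluation)

    for d in directives:                               # I approve: the human decides what to fix
        d.approved = input(f"Accept? {d.instruction} [y/N] ").strip().lower() == "y"

    prompt = build_ghostwriter_prompt(directives, chapter_path.read_text())
    draft = call_model(system=GHOSTWRITER_PROMPT.read_text(), user=prompt)  # the ghostwriter drafts

    chapter_path.write_text(draft)                     # from here I refine by hand;
    return draft                                       # Zelda scores the revision if I want a check
```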
Version control makes the whole thing work. Every chapter revision is a pull request with a diff. I can see exactly what changed and why. When Zelda flags a problem in Chapter 4 that echoes something she caught in Chapter 2, the context lives in the commit history, not in a chat log that expired three sessions ago. Context loss is the single biggest problem with AI creative work. You have a brilliant conversation, close the window, and start over from zero the next day. Version control solves this. The book’s editorial history is 106 commits across 47 pull requests.
I also built a QA sweep tool — a script that greps the entire manuscript for eliminated vocabulary, voice anti-patterns, and structural inconsistencies. It caught 8 instances of “sovereignty” that survived manual editing. It caught 7 places where a chapter referenced a framework by an outdated name. Automated quality assurance for prose sounds absurd until you realize that a 50,000-word manuscript has exactly the same consistency problems as a 50,000-line codebase.
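The sweep itself is nothing exotic: conceptually it is a handful of regexes run over every chapter file. A minimal sketch, with an abridged banned list and an illustrative directory layout:

```python
import re
from pathlib import Path

# Abridged; the real lists cover eliminated vocabulary, voice
# anti-patterns, and outdated framework names.
BANNED = [
    r"\bsovereignty\b",
    r"\bmost people\b",
    r"\bimagine someone who\b",
]


def qa_sweep(manuscript_dir: str = "chapters") -> int:
    findings = 0
    for chapter in sorted(Path(manuscript_dir).glob("*.md")):
        for lineno, line in enumerate(chapter.read_text().splitlines(), start=1):
            for pattern in BANNED:
                if re.search(pattern, line, re.IGNORECASE):
                    print(f"{chapter.name}:{lineno}: {pattern} -> {line.strip()}")
                    findings += 1
    return findings


if __name__ == "__main__":
    raise SystemExit(1 if qa_sweep() else 0)  # nonzero exit when the sweep finds problems
```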
The System Is the Argument
I have the executive function of a garden hose. I am never going to be the writer who produces clean prose at 5 AM every morning. Design constraints, not moral failures.
So I built a system that compensates. It does the work I’m bad at — consistency, pattern detection, maintaining context across 50,000 words — and I do the work the system can’t: deciding what matters, knowing what’s true, writing the 19 sentences that require having actually lived my life.
The manuscript is better than I could have written alone. Whether it’s any good is a different question — one I’ll find out when people read it. But “better than I could have written alone” is a low bar I’m comfortable clearing, given that the alternative was a competent manuscript that read like it was written by a well-meaning stranger. (And yes, I wrote this using the system!)