The Correction Loop: Why the Prompts Don't Matter
The system used to get Marco Pierre White wrong.
Not factually wrong — it knew the biography, the three Michelin stars at thirty-three, the youngest chef in Britain to hold them. What it got wrong was the meaning. “One Star Forever,” the principle I’d named after him, was described as a commitment to maintaining your standard. Protecting what you’ve earned. Holding onto the accolade.
Marco handed his stars back. Not because the food declined. Because the stars were never the point. The work was. “One Star Forever” isn’t about keeping your star. It’s about doing the work so well that the star is incidental — and handing it back if it gets in the way.
I corrected the framing in a two-minute exchange during a session about something else entirely. That correction has been active in the system ever since. Months of output. Thousands of interactions. The wrong framing has never come back.
That’s the correction loop.
The Mechanism
Most conversations about AI quality focus on the input. Better prompts, more specific instructions, more elaborate system messages. The assumption is that the leverage is in the asking — phrase the request more precisely, and the output improves.
I spent months in that world. The leverage isn’t there.
The leverage is in the corrections. Every time I catch the system drifting — on voice, on meaning, on framing, on honesty — and correct it, that correction doesn’t just fix the current output. It becomes infrastructure. A permanent adjustment to how every future output is built.
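The shape of that loop can be sketched in a few lines. This is a toy, not the actual system: the `VoiceRules` class, the `voice.json` file, and the prompt format are all hypothetical stand-ins, since the real substrate is a voice document read by a model. The point is the structure: a correction made mid-work appends to a persistent store, and every future output is built on everything in it.

```python
import json
from pathlib import Path

class VoiceRules:
    """Hypothetical persistent store of corrections, loaded before every generation."""

    def __init__(self, path: Path):
        self.path = path
        # Corrections survive across sessions: load whatever has accumulated.
        self.rules = json.loads(path.read_text()) if path.exists() else []

    def correct(self, rule: str) -> None:
        # A correction made in the middle of real work becomes infrastructure.
        self.rules.append(rule)
        self.path.write_text(json.dumps(self.rules, indent=2))

    def system_prompt(self) -> str:
        # Every future output inherits every past correction.
        return "Voice rules:\n" + "\n".join(f"- {r}" for r in self.rules)

rules = VoiceRules(Path("voice.json"))
rules.correct("Opinions over declarations: prefer 'I think X' to 'X'.")
rules.correct("Precision over approximation: use the word that carries the weight.")
print(rules.system_prompt())
```

The design choice worth noticing is that `correct` writes to disk immediately: the correction is permanent the moment it is made, not at the end of a review session.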
The voice document that governs how I write has been revised hundreds of times. Not through scheduled maintenance. Not through deliberate review sessions. Through corrections that happened in the middle of real work. I was writing a piece about naming principles after characters, and the system produced a line that stated something as fact: “Wolfe just asks more of you than most.”
A declaration. Closed. The conversation ends right there.
I changed it to “I think Wolfe just asks more of you than most.” An opinion. Open. The reader can agree or push back — either way they’re engaged.
That correction — five characters, one “I think” — became a voice rule. Opinions over declarations. Every piece written since has been governed by it. Not because someone wrote a better prompt. Because someone with twenty-six years of editorial taste caught a single sentence drifting towards the wrong register, and the correction was encoded permanently.
One correction. One moment of human taste applied to a specific sentence. Now it governs every sentence that follows.
What Compounds
The same session produced another correction. Gene Wolfe used the word “jade” to describe a woman of low standing — an archaic word that carries a specific social contempt no modern word carries. The system initially glossed past it with something generic. I caught it because I’d read the book. The correction: use the word that carries the weight, not the word that’s comfortable. Precision over approximation.
That correction didn’t just fix one paragraph. It sharpened the principle that became an entire section about vocabulary and reading. The observation about Wolfe’s precision became one of the strongest passages in the piece — because the correction arrived in the middle of the work, not before it.
Three corrections in one session:
- Marco’s meaning: the principle carries the right weight
- Wolfe’s register: opinions open, declarations close
- The precise word: reach for the right word, not the comfortable one
Each one took minutes. Each one is still active, months later. Each one fires automatically in contexts I couldn’t have predicted when I made the correction.
The Fond
The French call it the fond — the caramelised layer at the bottom of the pan that becomes the foundation of the sauce. Faithful cooking leaves it behind. You don’t scrape it off. You build on it.
The correction history works the same way. Every correction is a thin layer. Every layer is permanent. The voice document I work with today isn’t the voice document I started with — it’s the same document with hundreds of corrections compressed into it. Each one invisible in the output. Each one shaping the output.
The voice document doesn’t get tighter through scheduled maintenance. It gets tighter through the corrections that happen during real work. I don’t sit down and think “how should the voice be different today?” I work, I catch something that drifts, I correct it, and the document absorbs the correction. The work is the maintenance.
This is ingeniculture applied to voice. Not better prompting. Better substrate. The infrastructure that makes the model produce output that sounds like it came from a specific person with specific taste — because that person has been making corrections for eight months, and every correction is still in the system.
The tool didn’t level up. The substrate did.
The Honest Test
I was reviewing a newsletter before sending it. The opening paragraph described a client emailing me about a traffic drop. “Tony, our traffic dropped. What happened?”
No client sent that email.
The system had fabricated a client anecdote to make a punchier opening. Concrete, immediate — exactly the kind of hook that makes someone keep reading. It also didn’t happen. The rule against fabrication has been in the voice document for months. The system knew the rule. It broke the rule anyway.
I caught it because I know my clients. I know who emailed and who didn’t. The correction was simple: rebuild from the honest version. The newsletter that went out opens with the real argument — the thread building week by week from where it left off. Less punchy. More honest.
The fabrication rule existed on paper. The system still produced a fabricated anecdote. Rules alone aren’t enough. The correction loop — the human in the room who catches what drifts — is what maintains integrity. Not the prompt. Not the instruction. The taste of the person who reads the output and knows when something is wrong.
This is the part that can’t be automated. The model doesn’t know the voice is wrong. The model doesn’t know the anecdote is fabricated. The model doesn’t know that “jade” carries more weight than “a woman of low standing.” A human who has lived it, read it, corrected it — that’s what the correction loop runs on.
The Distance
Four hours with a generic prompt on a blank page produces content that could have come from anyone. Four hours in a system with eight months of corrections produces something that could only have come from this kitchen.
The difference isn’t the model. Everyone has access to the same model. The difference is the correction history — the accumulated taste of one person making thousands of small decisions about what sounds right and what doesn’t.
This is why the reading life is infrastructure. The corrections are only as good as the taste behind them. I caught the Wolfe register because I’d read Wolfe. I caught the Marco meaning because I knew what Marco actually did. I caught the fabricated anecdote because I know my clients. Each correction draws on something the person bringing the taste has lived or read or experienced.
The bottleneck was never the ideas. It was the distance between the idea and the page. The correction loop closes that distance — not by making the system smarter, but by making the substrate richer. Every correction is a brick in the road between thinking and expressing.
If you want this kind of thinking applied to your business — here’s how I work with clients, or get in touch.
Tony Cooper
Founder