The Split Brain: What Neuroscience Can Teach You About Running Two Different Reasoning Models
I asked claude.ai to evaluate the sales pipeline strategy. It came back with a sharp analysis — restructure the outreach cadence, double the daily volume, introduce a heat scoring system based on engagement signals.
Good thinking. Clear recommendations. One problem: Claude Code had already built all of it. The heat scoring was live. The volume had been doubled three weeks ago. The cadence was shipping daily at 7am.
Claude.ai had just spent twenty minutes reinventing what Claude Code had already committed to the codebase.
The left hand couldn’t see what the right hand was doing.
The Split Brain
In the 1960s, Roger Sperry and Michael Gazzaniga did something that would earn Sperry the Nobel Prize. They studied patients whose corpus callosum — the thick bundle of nerve fibres connecting the brain’s two hemispheres — had been surgically severed to treat epilepsy.
What they found was extraordinary. Each hemisphere could still function. The left hemisphere could still speak, reason, analyse. The right hemisphere could still perceive, recognise patterns, respond to visual information. Individually, each side was competent.
But they couldn’t coordinate.
Show a split-brain patient an image in their left visual field (processed by the right hemisphere) and ask them what they saw. They’d say “nothing” — because the left hemisphere, which controls speech, genuinely didn’t know. The right hemisphere saw it clearly but had no way to tell the left hemisphere what it had seen.
The left hand would reach for an object the right hand didn’t know existed. One hemisphere would laugh at something the other couldn’t access. Two competent systems, sharing the same skull, operating in complete isolation.
That’s exactly what happens when you run two models without a shared surface. Claude Code builds a heat scoring system on Monday. Claude.ai recommends building a heat scoring system on Tuesday. Two competent systems, sharing the same business, operating in complete isolation.
The Interpreter
Gazzaniga discovered something even stranger. When the left hemisphere — the verbal, reasoning side — didn’t know why the body was doing something, it didn’t say “I don’t know.” It made up a plausible explanation. Confidently. Convincingly. Completely wrong.
He called it the interpreter. The left hemisphere’s compulsive need to construct a narrative, even when it doesn’t have the facts.
I’ve seen the exact same behaviour with models. Ask claude.ai about a system it hasn’t seen and it won’t say “I don’t know what you’ve built.” It will construct a plausible analysis based on what it thinks you probably built. Confident. Convincing. Sometimes completely wrong.
This is why context matters more than capability. A reasoning model without access to what actually exists isn’t reasoning — it’s confabulating. Building coherent narratives from incomplete information. The narrative sounds right. The gap between the narrative and reality is where the damage happens.
The problem isn’t that the model is wrong. The problem is that it doesn’t know it’s wrong. The interpreter never flags its own uncertainty. It just tells a story.
The Two Hemispheres
Claude Code (the right hemisphere): Executes. Reads the codebase, runs commands, writes code, ships commits. It knows what exists because it built it. But it doesn’t reason about whether it should exist.
Claude.ai (the left hemisphere): Reasons. Evaluates strategy, challenges assumptions, thinks through implications. It knows what should exist. But it doesn’t know what already does.
Without a connection between them, each session starts from a partial picture. The execution model builds things the reasoning model already rejected. The reasoning model recommends things the execution model already built. And neither hemisphere knows it’s confabulating — each is doing its job with whatever it can see.
The Corpus Callosum
Sperry’s insight wasn’t that each hemisphere was broken. Each hemisphere was fine. The problem was the connection between them.
The corpus callosum is the largest white matter structure in the brain — roughly 200 million nerve fibres carrying information between hemispheres at speeds that make the coordination feel instantaneous. You reach for a cup with your right hand while your left hand steadies the saucer. You read words with your left hemisphere while your right hemisphere processes the speaker’s tone. The integration is so seamless you never notice it happening.
Until it’s gone. Then you notice nothing else.
I had the same problem with my own setup, and I didn’t recognise it for months. I was using Claude Code to build the platform — Django backend, Astro sites, the client pipeline, the audit system, the operational tools. Hundreds of commits. Sophisticated architecture. Real, working software.
And I was using claude.ai for strategic thinking — evaluating the business model, challenging assumptions, reasoning through decisions, planning the next moves.
Both were excellent at their jobs. Neither knew what the other was doing.
The Wiki Is the Corpus Callosum
The solution was embarrassingly simple once I saw the problem.
A wiki. In a Git repository. Shared through GitHub.
Claude Code writes to it. Updates documentation after building features. Records what was built, what decisions were made, what patterns were established. The git history captures the trajectory — not just what exists now, but how it got there and what was tried and abandoned along the way.
Claude.ai reads from it. Through GitHub integration, it can see the entire wiki, the codebase structure, the commit history. When I ask it to evaluate a strategy, it’s not guessing what exists. It’s reading what exists. When it recommends a direction, it’s building on what’s already been built, not starting from imagination.
The wiki isn’t documentation in the traditional sense. It’s the shared memory layer between two systems that would otherwise be blind to each other. Every page is a bundle of fibres in the corpus callosum — carrying what one hemisphere knows to the hemisphere that needs to know it.
When Claude Code builds a new feature, it updates the wiki. When claude.ai reasons about the business, it reads the wiki. The loop closes. The left hand sees what the right hand is doing.
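The write side of that loop needs nothing beyond Git. As a minimal sketch of what "updates the wiki" amounts to (the file layout and the `log_entry` helper are my own illustration, not part of the actual setup), it is appending a dated entry to a page and committing it so the reasoning model can read it next session:

```python
import subprocess
from datetime import date
from pathlib import Path


def format_entry(day: str, title: str, notes: str) -> str:
    """Render one dated wiki entry as a Markdown section."""
    return f"\n## {day}: {title}\n\n{notes}\n"


def log_entry(wiki_dir: str, page: str, title: str, notes: str) -> None:
    """Append an entry to a wiki page and commit it, so the change
    is visible to anything that can read the repository."""
    path = Path(wiki_dir) / page
    entry = format_entry(date.today().isoformat(), title, notes)
    with path.open("a", encoding="utf-8") as f:
        f.write(entry)
    # Stage and commit from inside the wiki repository.
    subprocess.run(["git", "-C", wiki_dir, "add", page], check=True)
    subprocess.run(
        ["git", "-C", wiki_dir, "commit", "-m", f"wiki: {title}"],
        check=True,
    )
```

Anything with the same shape works. The only requirement is that every change lands in a file both tools can read, with a commit message that records why it happened.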
What Coordination Actually Looks Like
Here’s what changed once the corpus callosum was in place.
Before: Claude.ai suggests building a client pipeline board with stages and heat scoring. I spend ten minutes explaining that Claude Code already built it. Claude.ai apologises and adjusts. Twenty minutes wasted, and the adjusted strategy is still working from a partial picture.
After: Claude.ai reads the wiki, sees the pipeline documentation, reads the commit history showing the heat scoring implementation, and opens with: “The pipeline is live with heat scoring. The gap I’m seeing is that stale contacts aren’t being re-engaged. Here’s a cadence for that.” First sentence is useful. No catch-up. No reinvention.
Before: Claude Code builds a feature based on what seems logical from the codebase. Sometimes that aligns with the strategic direction. Sometimes it doesn’t, and I have to redirect.
After: Claude Code loads the session context — which includes strategic decisions documented in the wiki — and builds in the direction that’s already been validated. The reasoning happened in claude.ai last week. The execution happens in Claude Code today. The wiki carried the decision between them.
Why Nobody Noticed
I spent months optimising each tool individually — better prompts, better session starts, better outputs — without realising the gap wasn’t within either tool. It was between them.
That’s the gap the public conversation misses. The discussion is all about which model is best, which prompt technique works, and how to get better outputs from a single conversation. All of that makes each hemisphere more capable — none of it addresses the fact that the hemispheres are operating in isolation.
You can have the world’s most capable left hemisphere and the world’s most capable right hemisphere. If they can’t communicate, the patient still can’t tie their shoes.
The Real Stack
- Claude Code for execution (the right hemisphere)
- Claude.ai for reasoning (the left hemisphere)
- A Git-hosted wiki as the corpus callosum
Two of those three are commodities anyone can download. The third — the shared memory layer, built over months of faithful work, encoding what was built and why — that’s the part that can’t be replicated by signing up for an account.
Building the Bridge
If you’re using Claude Code, claude.ai, or any combination of tools for anything beyond one-off questions, you have a split-brain problem whether you’ve noticed it or not.
Every conversation that starts with “I’ve been working on…” is a symptom. You’re manually being the corpus callosum — carrying context from one session to the next, from one tool to the other, from your memory to the model’s blank slate. You are the 200 million nerve fibres, and you’re doing it with language, one conversation at a time.
The wiki replaces that. Not partially. Entirely. The context lives in a shared surface that both systems can access. You stop carrying and start conducting.
It doesn’t have to be extensive. It starts with one page. How the business works. What’s been built. What the current priorities are. One document, committed to a Git repository, readable by both your execution tool and your reasoning tool.
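As a sketch, that first page can be as plain as three headings. The section names here are illustrative, not a required schema; the specifics are drawn from the system described above:

```markdown
# Operations Wiki

## How the business works
One paragraph: who the clients are, what we sell, how leads arrive.

## What's been built
- Django backend: client pipeline, heat scoring, audit system
- Astro sites
- Outreach cadence, shipping daily at 7am

## Current priorities
1. Re-engage stale contacts
2. Document the next feature as it ships
```

Commit it, and both hemispheres can read the same page before they speak.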
By month three, the wiki has grown because the work has grown. Every feature built gets documented. Every strategic decision gets recorded. The corpus callosum thickens with use — exactly like the biological one does.
The Principle
I call this The Split Brain. Not because the tools are broken — each half works beautifully. Because the default architecture is split, and most people compensate for the disconnection manually without realising there’s a structural solution.
Sperry won the Nobel Prize for proving that the connection between hemispheres matters more than the capability of either hemisphere alone. Sixty years later, the same insight applies to anyone running multiple models: the shared surface matters more than the individual tool.
The model is a commodity. The situation is the edge.
The wiki is the situation. The corpus callosum between two capable systems that, without it, are just two brilliant hemispheres reaching for objects the other can’t see.
Connect them. The left hand sees what the right hand is doing. And for the first time, they work together.
Read more about the system:
- Teaching Claude Code Taste
- The Atomic Commit: Why Your Git History Is Business Intelligence
- Ingeniculture: The Word for What’s Missing
- The Magic Word
- Stop Chatting With AI. Start Building With It.
If you want this kind of thinking applied to your business — here’s how I work with clients, or get in touch.
Roger Sperry, Nobel Prize in Physiology or Medicine, 1981. Michael Gazzaniga, who named the interpreter. The corpus callosum — 200 million fibres, one shared surface, the reason you can tie your shoes.
Tony Cooper
Founder