The Tier System: What Excalibur Taught Me About Loading Context
The prompt engineering world is built on one assumption. Better instructions, better outputs. The leverage is in the asking — the right phrasing, the right structure, the right command. I spent months in that world before I started asking a different question entirely.
Not “what should I tell it?” But “what does it see?”
That shift produced the tier system. Not by design. By asking the question and following where it led.
Excalibur
I called it Excalibur. The one sword to wield. The one eye to see. A single document that would hold everything Claude Code needed to know about the business — philosophy, methodology, client data, financial metrics, strategic decisions, operational procedures, voice rules, principles. 4,978 lines. Everything in one place.
The mental model was simple: AI can search a 10,000-line file. Give it everything and let it find what it needs. Total knowledge. Total context. One file to rule them all.
It didn’t work. The metrics went stale. MRR figures from last Tuesday sitting next to principles that hadn’t changed in months. The philosophy got buried under numbers that needed daily updating. Every session started with “is this still current?” instead of “let’s work.” The context window filled with 4,978 lines and the system spent its intelligence wading through information that wasn’t relevant to what I was actually cooking.
Excalibur collapsed under its own weight. Commit 23811e4a. The sword went back in the stone.
A context window isn’t storage. It’s attention. Fill it with 4,978 lines and the signal drowns in the noise, regardless of how good the signal is.
The Astro Wiki
So I broke Excalibur into pages. Logical enough. Build a wiki, organise it properly, link it together. And because I build in Astro, I built the wiki in Astro. Compiled, styled, navigable. A proper knowledge base.
It failed too. Not because of the content, because of the form. A compiled Astro site isn’t readable by a CLI tool working from a repository. The corpus callosum has to be readable by both hemispheres. I’d built the solution in the tool I knew rather than asking what surface the system could actually read.
The answer, when I asked, was embarrassingly simple.
Markdown files. Plain text. In a git repository. No build step between the writing and the reading. The system can see markdown directly. Everything else is a wall.
Two failures. Each one teaching something the other couldn’t. Excalibur taught me the difference between search and load. The Astro wiki taught me that the surface matters as much as the content. You can have the right information in the wrong place and it’s invisible.
The Metrics Problem
The markdown wiki worked. So I filled it. Every concept got a page. Every process. Every client note, every strategic thought, every decision that felt important enough to write down. The wiki grew to over five hundred pages.
Then came a problem I thought I’d left behind with Excalibur. The metrics went stale again.
With Excalibur, I could see why — everything in one file, the stale data obvious. With the wiki, it was subtler. A page about the sales pipeline with last month’s conversion rate buried in paragraph three. A page about the client portfolio with an MRR figure that was three weeks old. Individual pages, each looking current, each quietly wrong.
The problem wasn’t size this time. It was metabolism.
Metrics change faster than prose. Put them in the wiki and the wiki becomes untrustworthy. The system starts second-guessing everything it reads because some of it is stale. You can’t tell the AI “trust this document except for the numbers.” It reads the whole thing.
The solution arrived when I asked the question properly: where does this kind of information naturally live?
Metrics aren’t prose. They’re state. They belong in a database, queried live. Not written down and updated — pulled from the source of truth at the moment they’re needed. Django commands that query live data. Not a wiki page that was accurate last Tuesday.
That distinction — between storing information and reflecting state — is the insight that Excalibur never forced. The metrics problem is invisible until the wiki is working. It’s a second-order discovery. You can only find it once you’ve solved the first problem.
The Mantra
Three failures. Each one pointing at the same architecture.
Context — how I work. Methodology, voice, principles, workflow. Changes rarely. These are the stable foundations. This belongs in the wiki.
State — what’s true right now. MRR, client count, what was built yesterday. Changes daily. Sometimes hourly. This belongs in the database, queried live.
Metrics — performance data, keyword rankings, financials, pipeline numbers. This lives in the tools — DataForSEO, Google Search Console, Stripe — pulled on demand.
Each one lives where it naturally belongs. Mix them and they corrupt each other. Separate them and the system knows what to trust.
The Tier System
Once the wiki held only context, the next question was how to load it. Five hundred pages had become fifty — culled by a single test: does this fire? Does Claude Code load this page and produce better work because of it? Most didn’t. They were documentation. Not signal.
But even fifty pages loaded at once would fill the context window before the session started. The question was how to load the right context for what was actually being cooked right now.
The tier system wasn’t designed. It was discovered by asking that question.

Some documents needed to be present every session — identity, methodology, principles, the operational lookup tables. If Claude Code didn’t have these, it would fumble on basic questions. These became T1. Instinct. Always loaded. Under 250 lines each.
Some documents needed to appear when the topic came up. Say “Monday Service” and the client workflow loads. Say “The Empire” and the luxury architecture loads. Not always present, but instantly available. T2. Workflow. Loaded on trigger.
Some documents were needed only when the work explicitly demanded them. Deep strategy docs, architecture references, zone-specific playbooks. T3. Reference.
And some documents — the archive, the founding narrative — should never be bulk loaded. Too large, too dense. Search them for a specific fact. T4. Deep reference.
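The loading rules above can be sketched in a few lines. This is a hypothetical illustration, not the actual implementation — the file names and trigger phrases are invented stand-ins, and the real system lives inside Claude Code’s session tooling rather than a Python function.

```python
# Hypothetical tier registry. Document names and trigger phrases are
# illustrative stand-ins, not the real wiki's contents.
T1_ALWAYS = ["identity.md", "methodology.md", "principles.md", "lookups.md"]

T2_TRIGGERS = {
    "monday service": ["client-workflow.md", "cadence.md", "briefing-standard.md"],
    "the empire": ["luxury-architecture.md"],
}

def docs_to_load(message: str) -> list[str]:
    """T1 is always present; a T2 set joins the context when its trigger
    phrase appears. T3/T4 are never bulk loaded -- they are fetched or
    searched explicitly when the work demands them."""
    loaded = list(T1_ALWAYS)
    lower = message.lower()
    for trigger, docs in T2_TRIGGERS.items():
        if trigger in lower:
            loaded.extend(docs)
    return loaded
```

The point of the shape: the default cost of a session is four small documents, and everything else is pay-as-you-go.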
The naming came from behaviour, not theory. T1 documents behave like instinct — present before you think about them. T2 documents behave like workflow knowledge — there when the task demands them. The tiers describe how the knowledge is used, not how it’s filed.

When I start a session now, I say one word: “Initialise.”
Four T1 documents load. Under a thousand lines total. Claude Code knows who I am, how I work, what the business looks like, what the current priorities are.
Then Joan runs. Joan — named after Joan Holloway in Mad Men, the person who actually knew where everything was and what needed doing — is a Django command that handles operational briefings. What’s overdue, what’s due today, which clients haven’t been contacted. Not a wiki page. The actual numbers, from the source of truth, not a document that was accurate last Tuesday.
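The real Joan is a Django management command querying the live database; the shape of the logic looks something like this plain-Python sketch, with invented field names and an assumed fourteen-day contact threshold.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Sketch of a Joan-style briefing: state computed at the moment of asking,
# never stored as prose. Fields and thresholds are assumptions.

@dataclass
class Task:
    name: str
    due: date

@dataclass
class Client:
    name: str
    last_contact: date

def briefing(tasks, clients, today, contact_gap_days=14):
    """What's overdue, what's due today, which clients have gone quiet."""
    overdue = [t.name for t in tasks if t.due < today]
    due_today = [t.name for t in tasks if t.due == today]
    stale_before = today - timedelta(days=contact_gap_days)
    uncontacted = [c.name for c in clients if c.last_contact < stale_before]
    return {"overdue": overdue, "due_today": due_today, "uncontacted": uncontacted}
```

Because the numbers are computed from records at call time, the briefing cannot go stale the way a wiki page can.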
I say “Monday Service” and the T2 workflow documents load. The client methodology. The cadence. The briefing standard. Marco Pierre White ran his kitchen with mise en place — every ingredient prepped, every station ready, before the first ticket hit the pass. This is the same discipline applied to context. Everything needed for the work, loaded at the moment the work starts. Not before. Not everything. Just what’s on the rail.
This is how “pipeline” produces a cascade of intelligence. This is how the commit history becomes readable. The tier system is the infrastructure underneath all of it. The plumbing that makes the water flow.
The Wiki Reads the Work
One problem remained. The insight count — how many pieces have been published, what the front of house looks like — is exactly the kind of metric that goes stale in prose.
The solution is that the wiki doesn’t store the number. It reads the files.
Every new insight published is a markdown file in the repository. The wiki references the folder and derives the count from what actually exists. The moment a piece publishes, the count is already right. Nobody updates it. It updates itself.
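A minimal sketch of that idea, assuming a flat folder of markdown files (the folder name and file layout here are invented for illustration):

```python
from pathlib import Path
import tempfile

def insight_count(folder: Path) -> int:
    """Derive the published count from the files that exist; never store it."""
    return sum(1 for _ in folder.glob("*.md"))

# Throwaway directory standing in for the real insights folder.
repo = Path(tempfile.mkdtemp())
for name in ("pipeline.md", "split-brain.md", "magic-word.md"):
    (repo / name).write_text("# insight\n")
```

Publishing a new file changes the answer with no other action — the count is read, not written.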
That’s the distinction between storing a number and reflecting a state. “I have published 14 insights” stored as text goes stale. A wiki that reads the insights folder and counts what’s there doesn’t. One is a photograph. The other is a mirror.

The Question That Doesn’t Stop
The tier system is a current answer to a permanent question.
Context windows are expanding. Claude’s memory is improving. New tools arrive with different architectures. The right response isn’t to defend the tier system — it’s to keep asking whether it still earns its place. Does T1 still need to be under 250 lines, or can it hold more now? Does the trigger system for T2 still serve the work, or has something changed that makes it redundant?
Excalibur was the wrong answer asked in the right spirit. The tier system might face the same test. That’s not a weakness in the architecture. That’s the discipline that keeps it honest.
The three failures — Excalibur, the Astro wiki, the metrics staleness — each one was the system pushing back. Telling me something didn’t belong there. Guiding me toward the architecture that emerged when I stopped imposing and started asking.
Stop imposing your ideas on AI. Stop trying to tell it how to work. Ask for the best solution and it will guide you.
The Tier System is what’s left when you do.
Read more about the system:
- Teaching Claude Code Taste
- The Atomic Commit: Why Your Git History Is Business Intelligence
- The Split Brain
- The Magic Word
- Ingeniculture: The Word for What’s Missing
Tony Cooper
Founder