The Simplest Thing in Computing That Your Website Can't Do
I’m going to tell you about something that changed how I think about websites. Not a framework, not a tool, not a plugin. A single command that’s been available on every computer since 1973 and that I’d never once used in twenty-six years of building websites.
It’s called grep. It searches files for words.
That’s it. That’s the whole thing. You type a word, and it finds every file that contains it. Every page on your website, searched in under a second. Every mention of an old phone number. Every broken link. Every page that references a product you discontinued last year.
I typed a command on one screen and watched the website change on the other. And I thought: why have I been doing this the hard way for a quarter of a century?
The Discovery
The first time I ran grep across a website I was managing, I expected it to feel technical. Complicated. The sort of thing where you stare at the terminal wondering what went wrong.
It found every instance of the word I was looking for across fifty pages in less than a second.
I sat there for a moment. Then I searched for an old phone number that a client had changed six months ago. Three pages still had the old one. I’d missed them in the manual check because manual checks always miss something — you’re clicking through pages one at a time, and by the fifteenth page your eyes are glazing over.
Grep doesn’t glaze over. It reads every file, every time, and it doesn’t get bored.
Then I searched for pages that mentioned a service the client no longer offered. Two pages. Then I searched for broken internal links — pages that referenced URLs that had changed during a site redesign. Seven. All invisible to a visitor clicking through the site, all silently damaging the search rankings.
Grep searches the contents of files for a word or pattern. If your website content lives in files — Markdown, MDX, HTML — grep can search every page in under a second. If your content lives in a database — WordPress, Wix, Squarespace — grep can’t reach it. Not because of a technical limitation. Because the content isn’t in files. It’s in database rows behind an authentication layer that no command-line tool can see.
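To make that concrete, here's a minimal sketch. The folder, the file names, and the phone number are all hypothetical demo data — the point is the shape of the operation: one command, every page checked.

```shell
# Hypothetical demo content: three Markdown pages in a folder.
mkdir -p demo/content
printf '# Home\nCall us on 0117 946 0000.\n'    > demo/content/home.md
printf '# About\nOur story, no phone number.\n' > demo/content/about.md
printf '# Contact\nPhone: 0117 946 0000.\n'     > demo/content/contact.md

# -r searches the folder recursively; -l prints only the names of matching files.
grep -rl '0117 946 0000' demo/content
```

Drop the `-l` and grep prints the matching lines themselves rather than just the file names.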
You might be thinking: I can already search. Every text editor has find and replace. WordPress has it. Even Wix has it, buried somewhere in the menus.
But find and replace works inside one page, inside one tool, one field at a time. You’re holding a torch in a dark warehouse, checking one shelf. Grep turns the lights on in the whole building. Every page. Every file. Every match, with the exact line number and the file path, in under a second. And it chains — find every page with an old phone number, change it, see a diff of exactly what changed, and commit the update with a date-stamped record. One operation. Not fifty repetitions of the same click.
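That chain can be sketched in a few lines of shell. Everything here is hypothetical — the paths, the phone numbers, the commit message — and one portability note: BSD/macOS sed spells the in-place flag `-i ''` rather than `-i`.

```shell
# Hypothetical git-tracked content folder with one stale phone number.
mkdir -p site/src/content && cd site && git init -q
git config user.name demo && git config user.email demo@example.com
printf 'Phone: 0117 946 0000\n' > src/content/contact.md
git add -A && git commit -qm 'initial import'

# The chain: find every match, change it, review the diff, commit with a date stamp.
OLD='0117 946 0000'; NEW='0117 946 9999'
grep -rl "$OLD" src/content | xargs sed -i "s/$OLD/$NEW/g"  # change every file that matches
git diff --stat                                             # see exactly what changed
git commit -aqm "Update phone number ($(date +%F))"         # date-stamped record
```

One pass over however many pages match — fifty files would take the same three lines as one.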
The discovery wasn’t that grep exists. Developers have known about it for fifty years. The discovery was that I’d spent twenty-six years building websites on platforms that made this impossible — or at best, made it so slow and manual that the audit never happens at all.
The Delegation
Here’s where it gets interesting. I don’t even type the grep command most of the time.
I say something like “sprinkle some in-context links across the insights.” The system — Claude Code, working from the terminal — reads every page, finds the natural connections between topics, and adds links where they support the sentence. Not at the bottom of the page in a “Related Articles” box. Inside the prose, where the reference naturally makes the current paragraph richer.
This is fundamentally different from how a plugin like LinkWhisper works. LinkWhisper scans for keyword overlap through a database abstraction. It suggests links based on matching words — mechanical, algorithmic, often forced. “You mentioned ‘SEO’ on this page, and you have another page about SEO. Link them?”
When the system can read the actual files — the prose, the argument, the flow of the piece — it places links where a human editor would place them. Because it’s reading the same thing a human editor would read. Not a database query. The actual words.
The system reads every page the way an editor would read them. It places links where they belong because it can see the whole publication at once. On 13 March 2026, I ran an internal linking pass across fourteen published insights. One commit. The system read every piece, identified where each one naturally referenced another, and added the cross-links in context. Not a bulk operation — a considered editorial pass across the entire publication, in minutes.

That’s Retrofit First in action. I didn’t write new content. I made existing content work harder by wiring it together. The search engines responded because internal links are how they understand the structure of your expertise — which pages are related, which ones matter, how the whole thing fits together. Fourteen unlinked insights are fourteen orphaned pages. Fourteen cross-linked insights are an interconnected body of work.
The Overview
The third layer is the one that changed how I think about managing a website.
Stand back. Look at the whole site. Not one page at a time — all of it, at once. Like checking the dining room before service. Are the tables set? Is anything out of place? Does the front of house pull together as one connected thing, or is it fifty separate pages that happen to share a domain name?
With file-based content, that question is answerable. Grep every page for meta descriptions — are any missing? Are any truncated? Grep for image alt text — are any empty? Grep for the old company name after a rebrand. Grep for “2024” in content that should say “2025.” Grep for pages that don’t link to any other page — orphans that search engines struggle to find and rank.
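A few of those audits, sketched as one-liners over a hypothetical content folder. The front-matter key `description:` and the file names are assumptions — adjust the patterns to whatever your pages actually use.

```shell
# Two hypothetical pages: one healthy, one with problems.
mkdir -p audit/src/content
printf -- '---\ndescription: The home page\n---\n<img alt="Our logo">\n' > audit/src/content/home.md
printf -- '---\ntitle: About\n---\n<img alt="">\nWritten in 2024.\n'     > audit/src/content/about.md

grep -rL 'description:' audit/src/content   # -L lists files that LACK a meta description
grep -rn 'alt=""'       audit/src/content   # images with empty alt text
grep -rn '2024'         audit/src/content   # stale year references
```

The inversion flag is the quiet workhorse here: `-l` finds pages that have a problem, `-L` finds pages that are missing something they should have.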
I built a site crawler that found 541 redirect chains on this website in its first pass. Every internal link was triggering a redirect before reaching the page — invisible to visitors, costly to search rankings, and impossible to spot by clicking through the site manually. One command. 541 problems identified. All fixed in the same session. Health score went from 78.7 to 93.9 out of 100.
Try doing that in Wix. You can’t. The blog is separate from the pages. The pages are edited one at a time through a visual editor. There’s no way to search across everything, no way to audit everything, and no way to change everything. You’re clicking through pages one by one, forever.
WordPress is better — you have a database you can query, and WP-CLI gives you terminal access. But the content is still serialised data in MySQL rows. You can’t git diff it. You can’t see the publication history in a commit log. You can’t grep the actual prose the way you’d grep a folder of files. Every operation goes through an abstraction layer, and every abstraction layer is a place where something gets lost.
The Architecture Comparison
I think this is worth being specific about, because the differences are structural, not tribal.
Wix: Open each page in the editor. Use the browser’s find function. One page at a time. Fifty pages, fifty searches. If you miss one, you’ll never know.
WordPress: Write a database query against wp_posts to search the post_content field. Or install a plugin. Or export everything to a file and search that. Multiple steps, multiple tools, and the content you’re searching isn’t the content on the page — it’s a serialised version stored in a database that may include shortcodes, block markup, and HTML entities.
.md files: grep -r "free consultation" src/content/ — every page, every match, every file path. Done. Under a second. And the content you’re searching is the exact same content that appears on the website. No abstraction. No translation. The file is the page.
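Run against a toy content tree, the output format is worth seeing once: file path, line number, matching line. The pages below are hypothetical; `-n` is what adds the line number.

```shell
# Hypothetical two-page content tree.
mkdir -p sketch/src/content
printf 'Book a free consultation today.\n' > sketch/src/content/services.md
printf 'Nothing to see on this page.\n'    > sketch/src/content/about.md

grep -rn "free consultation" sketch/src/content/
# -> sketch/src/content/services.md:1:Book a free consultation today.
```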
This isn’t a preference. It’s physics. The SEO industry sells complexity — plugins for linking, plugins for schema, dashboards for audits, subscriptions for monitoring. Every solution adds a layer between you and your content. Grep removes every layer. The simplest architecture enables the most powerful operations.
What This Actually Means for Your Website
I’m not suggesting you learn grep. I’m suggesting you ask a question about your website that you’ve probably never thought to ask: can the person managing it search every page at once?
If the answer is no, then every audit is incomplete. Every linking pass is partial. Every content update is a manual check that relies on someone remembering where things are. And as the site grows — more pages, more services, more content — the gap between what you think your site says and what it actually says gets wider every month.
If the answer is yes — if your content lives in files that can be searched, cross-linked, audited, and batch-edited — then the site doesn’t just grow. It compounds. Every new page strengthens the existing ones through internal links. Every audit catches things that would have slipped through. Every update touches everywhere it needs to, not just the pages you remembered to check.
The platform either carries your thinking or it blocks it. That's not a technology preference. It's the difference between a bottleneck and throughput. That's what I think ingeniculture is really about. Not the model. Not the commands. The architecture that lets the simplest operations happen — and then gets out of the way.
Related: Content As Code · The Atomic Commit · The Power of Retrofitting · The CLI Is the Interface · The Moment I Stopped Clicking Buttons · Ingeniculture · The Correction Loop
If you want a website built on architecture that compounds instead of decays — here’s how I work, or get in touch.
Tony Cooper
Founder