Fat Prompt, Thin Client, Joined Backend
How to avoid ending up renting your own expertise back from a vendor and your own data back from a database.
Every few months, a new AI-powered dev tool ships with a breathless list of features: custom agent loops, retrieval pipelines, and orchestration layers. And every few months, the previous generation of those features becomes redundant because the model just... learned to do it natively.
This is the Fat Harness trap. It’s the natural temptation to build a Rube Goldberg machine of intricate logic—state machines that “manage” the AI’s reasoning. These are impressive to demo but fragile in one direction: upward. Every capability you hand-engineer into the harness is something the next model release might do better. Fat harnesses get eaten by model improvements, create vendor lock-in, and encode yesterday’s constraints (like 8K context workarounds) as dead weight.
Thin Client: Give the Model Hands, Not Your Brain
Bare model inference isn’t enough to delegate tasks. You can’t just wish a bug away; something has to actually touch the file system and execute code. This is where the Thin Client comes in.
The goal isn’t a “Zero Client,” but a shift in responsibility:
The Brain (Model): Handles the reasoning, the branching, and the “What do I do next?”
The Hands (Thin Client): Provides the robust “primitives”—authentication, file I/O, and secure sandboxes.
If your harness has complex “If/Then” logic to guide the model, you’ve built a brain. If your harness is a high-performance runtime that executes the model’s intent, you’ve built hands. Build hands.
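The hands-vs-brain split can be made concrete with a minimal sketch. This is a hypothetical harness, not any vendor's API: `call_model` stands in for whatever chat endpoint you use, and the tool set is illustrative. Note what's absent: there is no branching logic about what the model should do next, only execution of what it asked for.

```python
# Minimal "hands": execute the tool calls the model requests, feed results
# back, repeat. All reasoning and branching stays in the model.
# `call_model` is a placeholder for any chat API (an assumption, not a real
# library); the tools are primitives only -- file I/O and command execution.

import subprocess
from pathlib import Path

TOOLS = {
    "read_file":  lambda args: Path(args["path"]).read_text(),
    "write_file": lambda args: f"wrote {Path(args['path']).write_text(args['content'])} chars",
    "run":        lambda args: subprocess.run(
        args["cmd"], shell=True, capture_output=True, text=True, timeout=60
    ).stdout,
}

def agent_loop(call_model, task):
    """Run until the model says it's done. The client never decides *what*
    to do next -- it only executes and reports back."""
    history = [{"role": "user", "content": task}]
    while True:
        msg = call_model(history)  # model returns {"tool":..., "args":...} or {"done":...}
        if "done" in msg:
            return msg["done"]
        result = TOOLS[msg["tool"]](msg["args"])
        history.append({"role": "tool", "content": str(result)})
```

The moment you find yourself adding "if the model seems stuck, retry with strategy X" to this loop, you've started building a brain, and the next model release will likely do it better.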
Where the Value Actually Lives: The Fat Prompt
If the harness is a commodity runtime, the moat is the Fat Prompt.
This isn’t “system-prompt engineering” (telling the AI to speak like a pirate). The valuable prompt is the full context you bring to the table—the 2-page description of a high-value skill or a complex business process.
The key is that these “fat skills” must live in raw text that you own.
Not in a proprietary vendor memory format.
Not in a hidden “learnings” directory controlled by a specific tool.
Markdown files in your repo or design docs in version control are portable. They survive model migrations, and they survive vendors who can't get enough compute to serve you. When a 10x better harness ships, your knowledge comes with you because it was never locked in.
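What "portable fat skills" means in practice can be sketched in a few lines. The directory name and file layout here are assumptions, not any tool's convention; the point is that the entire knowledge base is plain files you own, and switching harnesses means pointing the new one at the same directory.

```python
# Sketch: "fat skills" as plain markdown in the repo, concatenated into the
# prompt at session start. The skills/ layout is illustrative, not a real
# tool's format -- any future harness can read the same files.

from pathlib import Path

def load_skills(skills_dir="skills"):
    """Concatenate every markdown skill file into one context block,
    in a stable (sorted) order."""
    parts = []
    for path in sorted(Path(skills_dir).glob("*.md")):
        parts.append(f"## Skill: {path.stem}\n\n{path.read_text()}")
    return "\n\n".join(parts)
```

No hidden state, no proprietary memory format: `git log` is the audit trail, and a grep is the retrieval system until you genuinely need more.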
The Integration Moat: Thick Backends
A “fat prompt” is only useful if it isn’t starved of data. This leads to the real hard problem: The Thick Backend.
Every organization has the same disease: data scattered across Salesforce, QuickBooks, Slack, and internal DBs. These systems all contain the same entities—the same people and deals—but described in different schemas that often contradict each other.
Your AI needs to know everything you know. And it’s never in one system.
And YOLO’ing it all into one pile is probably not going to work.
This is where you should invest. You need a cleaned backend that resolves identities across silos to present a unified, canonical view. This is the unsexy, difficult infrastructure that determines if your AI is actually informed or just guessing.
Everyone YOLO’ing all their data into Claude or ChatGPT via 30 different MCPs is mostly getting context rot.
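The cleaned-backend idea, reduced to its core, looks something like this sketch. The field names and the match key (a normalized email) are simplifying assumptions; real entity resolution needs fuzzier matching and a human review queue for ambiguous cases. But the shape is the same: many silo schemas in, one canonical record out.

```python
# Sketch of the "thick backend" idea: the same person appears in several
# systems under different schemas; resolve them to one canonical record.
# Matching on normalized email is a deliberate simplification -- production
# systems need fuzzy matching and review queues for the hard cases.

def canonicalize(records):
    """records: list of (source_system, record_dict) pairs from different
    silos. Returns {normalized_email: merged_record}."""
    canonical = {}
    for source, rec in records:
        # Each silo spells the email field differently.
        email = (rec.get("email") or rec.get("Email")
                 or rec.get("contact_email") or "").strip().lower()
        if not email:
            continue  # unresolvable here; real systems queue these for review
        entry = canonical.setdefault(email, {"sources": []})
        entry["sources"].append(source)
        for key, value in rec.items():
            entry.setdefault(key.lower(), value)  # first silo wins on conflicts
    return canonical
```

Given a Salesforce row with `"Email": "Ada@Example.com"` and a QuickBooks row with `"email": "ada@example.com"`, this yields one record carrying both the deal and the invoice. That merge, done carefully and at scale, is the difference between an AI that is informed and one that is guessing.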
The Stack, Restated
Fat Prompt: Your expertise in portable text you control. Models and harnesses are transient; your domain knowledge is the durable asset.
Thin Client: The “hands.” Use the lightest-weight harness that can execute the model’s intent. Don’t build a brain that the next model will just replace.
Thick Backend: The integration layer. A canonical data model that any model can query.
You need fat skills you own, a thin harness (which you don't need to own) to drive the AI well, and a thick backend that knows all your key data. Anything less and you're renting your own expertise back from a vendor.

