Two Layers, One Mental Model: How I Stopped Fighting My AI Tools
The mess that triggered this

A few months ago, my AI tool usage looked like most engineers' AI usage. I was exploring several AI coding assistants — IDE-integrated ones, terminal agents, chat-based models — often more than one at a time. There was no system, no structure, no baseline. Which meant five things, all bad:

1. Every session started from zero. I re-explained the project to the AI in every new chat — what the repo does, how the stack is wired, what the deployment model looks like.
2. Same task, different answers. The same question in two different tools got different and often contradictory output, because the prompts and pre-loaded context were different.
3. No reuse. Whatever prompt finally worked died in my local history.
4. No guardrails. The AI happily edited files it had no business touching, because nothing told it not to.
5. The tools' actual capabilities — skills, memory, rules, hooks, subagents — went unused. I had powerful agents and was using them as fancy autocomplete.

Underneath all five was a vocabulary problem: CLAUDE.md, AGENTS.md, skills, workflows, workpacks, rules, agents, subagents, hooks, MCPs, plugins. Read any one piece of documentation and you walk away with a useful but isolated definition. Read all of them and the definitions blur into each other. Is a skill just a small workflow? Is a rule just a CLAUDE.md fragment? When do I reach for a subagent versus a skill? Where do MCPs fit? ...