[Image: AI Workflow overview]

The chain: Scope → Plan → Execute → Verify → Wrap-up. Skip any step and the output suffers.


Foundation: the rulebook

  • CLAUDE.md — coding conventions, design patterns, git workflow, verification rules
  • Checked into git — the whole team benefits
  • Keep it under 200 lines. If it belongs in a linter config, it doesn’t belong here.
  • Every mistake becomes a new rule. The file compounds over time.
  • Rules/ — modular instruction files, path-scoped. Loaded alongside CLAUDE.md automatically.
  • Skills — on-demand expertise (deployments, migrations, security review). Can spawn subagents.
  • Permissions — pre-allow build/test scripts, deny destructive commands and .env reads. Reduces friction but doesn’t eliminate it.
  • Deep dive: Anatomy of the .claude/ Folder
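
A permissions setup along these lines might look like the following sketch of a .claude/settings.json — the specific commands and patterns are illustrative, not canonical:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run build:*)",
      "Bash(npm test:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Read(.env)"
    ]
  }
}
```

Pre-allowing the build and test scripts removes the approval prompt from the tight inner loop; the deny list is the backstop for destructive commands and secrets.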

The cycle

Scope

  • Define boundaries before any code. “What are the edge cases?” “How does this interact with X?”
  • You know what you want. The agent needs you to articulate it with precision.
  • Spend time here. Scoping takes longer than the implementation. That’s the point.
  • Never inject new ideas mid-task. Finish. Commit. Clear context. Start fresh.
  • Claude Code has a planning mode — Shift-Tab to switch. This is where I spend most of my time.

Plan

Every plan covers five things:

  1. What to build — scoped tight, one clear deliverable
  2. What not to touch — explicit boundaries, no “improving” nearby code
  3. How to prove it works — define the verification step upfront
  4. Constraints — which libraries, patterns, and conventions to follow
  5. Definition of done — tests passing, linter clean, diff under 200 lines

Bad plan: “Add user notifications.” Good plan: “Add an email notification when an appointment is confirmed. Use the existing EmailService. Don’t modify the API layer. Run the integration suite. Show me the output.”
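
As a sketch, the five points can be captured in a short plan template (headings and wording hypothetical, built on the notification example above):

```markdown
## Plan: email notification on appointment confirmation

1. Build: send an email via the existing EmailService when an appointment is confirmed.
2. Don't touch: the API layer, unrelated notification code.
3. Proof: the integration suite passes; show the output.
4. Constraints: reuse EmailService; follow existing service patterns.
5. Done: tests green, linter clean, diff under 200 lines.
```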

Execute

  • Hand the agent the plan. Let it work.
  • Use skills to sequence steps correctly (e.g. coding and testing skills).
  • If you’re correcting mid-task, go back to scope — the approach needs rethinking.
  • Large refactors work as a sequence of small, verified steps.

Verify

The form varies. The discipline doesn’t.

Never trust the agent’s claim. Trust output.

  • New feature → spin up the full stack, walk through the user flow
  • Bug fix → run the integration suite before and after, show both
  • API change → hit the endpoint with real payloads, show request and response
  • Refactor → run the full test suite, then smoke test affected flows
  • Frontend → start the dev server, load the page, describe what renders

When the agent says “done” — say “show me the test output.”

Use skills to verify: code review, security, QA. Skills enforce consistency across reviews.
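
In CLAUDE.md, verification rules of this kind might read as follows — a sketch of the wording, not a canonical rule set:

```markdown
## Verification
- Never claim a task is done without running the relevant tests.
- After an API change, show the request and response for at least one real payload.
- After a bug fix, run the integration suite before and after, and paste both outputs.
```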

Wrap-up

  • Tests pass. Linter clean. Diff reviewable. Commit and push.
  • Before the next task: what went wrong? What took too long?
  • Add new rules to CLAUDE.md or improve skills. Every answer compounds.
  • Clear context. Start fresh. One task, one session.

Discipline: context hygiene

  • Don’t mix unrelated tasks in one session. The “kitchen-sink session” kills quality.
  • For long tasks: write a handoff doc (goal, progress, next steps) before starting a new session
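
A handoff doc can stay minimal. As a sketch (filename and headings hypothetical):

```markdown
# Handoff: <task name>

## Goal
One sentence on what this task delivers.

## Progress
- What is done and verified so far.

## Next steps
- The next small, verifiable step.
- Open questions or risks.
```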

Feedback loops

  • Tests fail → back to execute. The plan is fine. The code needs fixing.
  • Stuck in a debug loop → stop. Back to scope. The approach needs rethinking.
  • Mistake found → add a rule to CLAUDE.md. The system gets better.
  • Task complete → commit. Clear context. Start fresh at scope.

Common anti-patterns

  • Agent builds the wrong thing → tighter scoping. Constrain the approach.
  • Infinite debug loop (fix → fail → fix → fail) → stop the agent. Research the root cause. Go back to scope.
  • 2000-line diff you can’t review → break the plan into smaller pieces.
  • Agent “improves” code you didn’t ask about → add “what not to touch” to the plan.
  • Quality degrades mid-session → context pollution. Clear and start fresh.
  • Agent claims it works but didn’t test → add verification rules to CLAUDE.md. Demand output.
  • Fix introduces a new bug → commit before you debug. Always.
  • Agent ignores rules or applies them inconsistently → check for conflicting instructions. Simplify. Restate key rules in long sessions.
  • Every repo needs fresh setup → the tax of Stage 3. Reuse what you can. Accept the rest.
  • Messy PR — mixed features, wrong branch → tighter scoping. Smaller changes. One task, one session.