Part 1: The Mental Model Gap


1. Orientation: What You Think Is Happening

You think you are delegating programming to an assistant. What is actually happening is not programming. It is stochastic text generation with full filesystem privileges.

You ────prompt──→ [ LLM ] ────token stream──→ [ Local Execution ]
                    ↑                              ↓
                    └────reads files────────────────┘
                    └────writes files───────────────┘
                    └────executes bash──────────────┘

The agentic loop: a blind autonomous sequencer.

Claude Code is not a programmer. It is a stochastic text generator with file system mutation privileges, wrapped in an agentic loop.

The difference between those two framings determines:

  • What “task complete” means
  • Who owns correctness
  • Where your review effort goes
  • When you are legally liable

2. System Classification

Mechanical Reality

Claude Code is a CLI that:

  1. Accepts natural language
  2. Sends prompts to an API
  3. Receives tool invocations
  4. Executes them locally
  5. Feeds results back
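The five steps above can be sketched as a driver loop. Everything below is a stub: the message format, the tool schema, and the scripted model stand-in are invented for illustration, not Claude Code's actual internals.

```python
def run_agent_loop(model, tools, prompt, max_turns=10):
    """Drive the loop: prompt -> model -> tool call -> result -> model ..."""
    messages = [{"role": "user", "content": prompt}]   # 1. natural language in
    for _ in range(max_turns):
        reply = model(messages)                        # 2. prompt goes to the API
        if reply["type"] == "text":                    # model decided it is done
            return reply["content"]
        tool = tools[reply["name"]]                    # 3. a tool invocation comes back
        result = tool(**reply["args"])                 # 4. executed locally
        messages.append({"role": "tool",               # 5. result fed back into context
                         "name": reply["name"],
                         "content": result})
    return "max turns exceeded"

# Scripted stand-in for the model: one file read, then a final answer.
script = iter([
    {"type": "tool", "name": "read_file", "args": {"path": "notes.txt"}},
    {"type": "text", "content": "done"},
])
tools = {"read_file": lambda path: f"<contents of {path}>"}

print(run_agent_loop(lambda messages: next(script), tools, "summarize notes.txt"))  # -> done
```

Note what the loop does not contain: no verification step, no rollback, no model of your system. It sequences tool calls until the model emits text. That is the whole machine.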

What It IS NOT

Not a deterministic compiler
Same input ≠ same output.
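A toy illustration of why: each token is sampled from a probability distribution, not looked up deterministically, so identical prompts can diverge at any step. The vocabulary and weights here are invented.

```python
import random

# Hypothetical next-token distribution after some prompt.
vocab = ["retry", "return", "raise"]
weights = [0.5, 0.3, 0.2]

def sample_next_token(seed):
    """One sampled 'next token' -- the seed stands in for run-to-run randomness."""
    rng = random.Random(seed)
    return rng.choices(vocab, weights=weights, k=1)[0]

print(sample_next_token(1), sample_next_token(2))  # -> retry raise
```

Two runs, same distribution, different tokens. Multiply that by every token in a diff and "regenerate" never means "reproduce."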

Not stateful
No memory of terminal context outside the window.
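Mechanically, "no memory" means the only state is the message list sent with each request, truncated to fit a token budget; anything pushed outside is simply gone. The budget and the crude word-count tokenizer below are invented for illustration.

```python
def fit_to_window(messages, budget=8):
    """Keep only the most recent messages whose total 'tokens' fit the budget."""
    kept, used = [], 0
    for msg in reversed(messages):       # newest first
        cost = len(msg.split())          # crude stand-in for a tokenizer
        if used + cost > budget:
            break                        # older context is silently dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "fix the auth bug",
    "I renamed the config file",
    "now run the tests",
    "tests failed with ImportError",
]
print(fit_to_window(history))
```

The model never "forgets" the renamed config file; it never receives it. From its side, that message does not exist.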

Not a code reviewer
Generates tests with the same blind spots as the code.

Not a system architect
Pattern-matches structure; doesn’t reason about invariants.

Diagram: The Boundary of Knowledge

┌──────────────────┐
│  Claude "Knows"  │  ← Files it reads during task
└──────────────────┘  ← Patterns from training data
                    ← Nothing else

┌──────────────────┐
│  You Own         │  ← Correctness of output
└──────────────────┘  ← Security implications
                    ← Cleanup after failures

If you don’t review assuming every line is subtly wrong, you’ve misunderstood the tool.


3. The Risk of Anthropomorphism

When you assign human traits to the tool, you expose yourself to specific failure modes.

The Trap: “It understands my intent.”

The Reality: You fail to specify constraints. It builds a feature that works in isolation but breaks architectural invariants it couldn’t “see.”

The Trap: “It tested its work.”

The Reality: You trust passing tests that were generated by the same model that wrote the broken code. Tautological success.
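A minimal invented example of tautological success: the function and its generated test encode the same unit confusion, so the suite is green while real callers get garbage.

```python
def apply_discount(price, percent):
    # Bug: "percent" is treated as a fraction, so percent=20 subtracts
    # 2000% of the price instead of 20%.
    return price - price * percent

# A test generated alongside the code shares the same misreading of the
# unit, so it passes and "verifies" the bug.
def test_apply_discount():
    assert apply_discount(100, 0.2) == 80

test_apply_discount()           # green
print(apply_discount(100, 20))  # a real caller gets -1900
```

A test written by the author of the bug checks the author's assumptions, not yours. Independent review is the only check that isn't sampled from the same distribution.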

The Trap: “It learned from my correction.”

The Reality: You assume a fix in one file propagates to its understanding of the whole system. It does not.


Next: Operational Mastery

In Part 2 (Paid), we tear down the machinery. We will look at the specific mechanics of the agentic loop, the hard limits of the context window, and the hidden cost models that turn small tasks into budget explosions.

You now understand what it is. Next, you need to understand how it works, and how it breaks.

[Continue to Part 2: Operational Mastery]