Agentic Coding Guide

Before the Loop

Infrastructure and questioning discipline — the prerequisites for a productive feedback loop.

The core loop

At its simplest, an LLM agent is autocomplete in a loop — tool calls, one after another, guided by what it sees in its context window. What makes this loop productive is feedback: signals at every layer that tell the agent whether it's on track, so it can course-correct before going too far in the wrong direction.
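The loop itself can be sketched in a few lines of shell. This is a toy, not any real agent runtime: `llm_next_step` is a stub standing in for the model, and the "tools" are plain commands.

```shell
# Toy sketch of the core loop: the "model" (a stub here) picks the next
# tool call; its output is appended to context and shapes the next step.
step=0
llm_next_step() {
  # Stub for the model. A real agent would send $context to an LLM.
  step=$((step + 1))
  if [ "$step" -ge 3 ]; then action="done"; else action="echo step-$step"; fi
}

context=""
while :; do
  llm_next_step
  [ "$action" = "done" ] && break
  result=$(eval "$action")        # run the tool call
  context="$context $result"      # feedback: output re-enters the loop
done
echo "context:$context"           # prints "context: step-1 step-2"
```

Everything that follows is about improving the quality of `$result` — the signal the loop feeds on.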

Everything in this guide is about making that feedback loop tighter. This page covers the prerequisites: the infrastructure that provides the signals, and the questioning discipline that prevents the loop from starting on the wrong foot.

Three categories of agent infrastructure

Primitives & patterns

Building blocks agents reach for instead of inventing their own.

  • Co-located code — When the thing an agent needs is next to the thing it's editing, it finds it. Three directories away behind an abstraction, it doesn't.
  • Usage patterns — How you encode "this is how we do it here." An NPM script, a README example, a reference implementation the agent can copy from.
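One way to encode a usage pattern is a single canonical check entrypoint the agent can find and reuse. A minimal sketch; the tool names in the commented wiring (eslint, tsc, vitest) are assumptions about the stack, not from the source:

```shell
# Hypothetical canonical entrypoint: run every check in order, stop at
# the first failure so the agent gets one clear signal per run.
run_checks() {
  for cmd in "$@"; do
    echo "running: $cmd"
    sh -c "$cmd" || return 1    # first failing check wins
  done
}

# Example wiring (tool names are assumptions about the stack):
# run_checks "npx eslint ." "npx tsc --noEmit" "npx vitest run"
```

Expose it as an NPM script or Makefile target and document it in the README, so there is exactly one discoverable way to "check your work."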

Guardrails

Feedback mechanisms that tell agents whether they're on track.

  • Rules (proactive) — Shape behavior before the agent acts.
  • Hooks (reactive) — Fire after the agent acts: an edit to specific files triggers a check or blocks the change.
  • Tests — If the agent can't verify its own work, you're the bottleneck.
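A reactive hook can be as small as a script whose exit code blocks the change. A sketch, assuming an agent runtime that passes the edited file path and treats a nonzero return as "block" — the actual wiring is tool-specific:

```shell
# Hypothetical hook: refuse edits to sensitive files, let the rest through.
# Return code 2 signals "block this change" (a convention assumed here).
on_edit() {
  file="$1"
  case "$file" in
    *.env|*secrets*)
      echo "blocked: $file is sensitive" >&2
      return 2 ;;
    *.lock)
      echo "blocked: lockfiles are generated, not hand-edited" >&2
      return 2 ;;
  esac
  return 0
}
```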

Enablers

Let agents run longer without a human in the loop.

  • Skills — Package up repeated work into reusable capabilities.
  • MCPs — Connect agents to systems your team already lives in — Slack, Datadog, Linear.

Is your environment ready?

Run an agent and watch what happens:

  • Can it start your local environment?
  • Can it run tests and make sense of the output?
  • Can it pull external context? (Logs, issues, tasks, version history.)
  • Can it verify its own changes? (Tests, type checks, a dev server, screenshots from a headless browser.)

Each "no" is a gap in the feedback loop. The agent will guess where it can't verify.
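The checklist can be run mechanically. A minimal probe, sketched under assumptions about the stack — every command on the right is a placeholder to swap for your project's real entrypoints:

```shell
# Hypothetical readiness probe: one check per question above.
gaps=0
check() {
  desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "ok   $desc"
  else
    echo "GAP  $desc"           # a "no": the agent will guess here
    gaps=$((gaps + 1))
  fi
}

check "run the test suite"    sh -c 'npm test --silent'   # assumed command
check "type-check the code"   sh -c 'npx tsc --noEmit'    # assumed command
check "pull external context" command -v gh               # assumed: GitHub CLI
check "verify in a browser"   command -v playwright       # assumed: headless browser

echo "gaps: $gaps"
```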

What's next

Once the environment is in shape, give agents their own machines — each agent on its own isolated VM with a full dev environment, able to verify its own work before handing back a PR with screenshots, videos, and logs.


Source: Eric, on agentic engineering (2026-04-08)

Challenge assumptions before the loop starts

LLMs are trained to be agreeable. They'll confirm your assumptions rather than challenge them. The fix is to build a deliberate challenge step before execution — surface the unknowns before the loop begins burning context on a flawed premise.

The /grill-me pattern: before the agent proceeds with a plan, it interrogates your request — asking the hard questions a good senior engineer would ask.

  • "You said 'users' — do you mean all users, or just active ones?"
  • "This will overwrite existing data. Is that intentional?"
  • "This assumes the schema hasn't changed — should I verify first?"

The cost of flushing out a bad assumption is one question. The cost of discovering it later is a rewrite.

.claude/skills/grill-me.md
---
name: grill-me
description: Interview the user relentlessly about a plan or design until reaching shared understanding, resolving each branch of the decision tree. Use when user wants to stress-test a plan, get grilled on their design, or mentions "grill me".
---

Interview me relentlessly about every aspect of this plan until
we reach a shared understanding. Walk down each branch of the design
tree resolving dependencies between decisions one by one.

If a question can be answered by exploring the codebase, explore
the codebase instead.

For each question, provide your recommended answer.

The planning step isn't just about making a plan. It's about discovering what you don't know yet.
