Feedforward
A feedforward is any control you place before the agent acts, steering it toward correct output on the first attempt.
“The cheapest bug to fix is the one you prevent.” — Michael Feathers
Also known as: Guide, Proactive Control, Steering Input
Understand This First
- Harness (Agentic) – the harness loads and orchestrates feedforward controls.
- Context Engineering – choosing which feedforward to include is a context engineering decision.
Context
At the agentic level, feedforward sits inside the harness that wraps a model. Where feedback sensors observe what an agent did and help it correct course afterward, feedforward controls shape what the agent does before it writes a single line. They reduce the need for correction by raising the odds of a good first attempt.
The idea comes from control theory, where a feedforward controller acts on known inputs rather than waiting for error signals. In agentic coding, the known inputs are your project’s architecture, conventions, constraints, and domain knowledge. The practical question: how do you get them in front of the agent at the right moment?
Problem
How do you prevent an agent from producing output that violates your project’s rules, structure, or intent, without relying entirely on after-the-fact correction?
An agent that generates code and then runs tests to find mistakes will eventually converge on a working solution. But each correction loop costs time, tokens, and context window space. Some mistakes compound: an agent that misunderstands your architecture in step one builds every subsequent step on a flawed foundation. Catching that at the end costs far more than preventing it at the start.
Forces
- Agents lack implicit knowledge. A human developer absorbs project conventions over weeks. An agent starts fresh every session and knows only what you tell it.
- Correction is expensive. Each feedback loop consumes tokens, time, and context. Multiple rounds of “try, fail, fix” can exhaust the context window before the task is done.
- Too many constraints overwhelm. Flooding the agent with every rule and guideline wastes context space and can confuse the model about what matters most for the current task.
- Conventions change. Feedforward controls must stay current or they actively mislead.
Solution
Place the right information in the agent’s path before it acts. Feedforward controls come in two forms: documents that the agent reads and computational checks that run during generation.
Documents as feedforward. Instruction files, specifications, architecture decision records, coding conventions, and domain model definitions all serve as feedforward when loaded into context before the agent begins work. The harness typically loads project-level instruction files automatically. Task-specific feedforward requires you to point the agent at the right documents: “Read the auth module’s design doc before changing anything in that directory.”
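A persistent feedforward document can be short. The excerpt below is a hypothetical instruction file; the file name, module names, and rules are illustrative, not a recommended standard:

```markdown
# Project conventions (excerpt from a hypothetical CLAUDE.md)

- TypeScript strict mode everywhere; no `any` in exported signatures.
- Module boundaries: `payments` must never import from `users` directly.
- Before changing a module, read its design doc under `docs/`.
- Run `npm run lint` before considering any task complete.
```

Each line earns its context cost by preventing a class of mistake the team has actually seen.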
Computational feedforward. Type systems, schema validators, linter configurations, and module boundary rules can run during or immediately after generation, catching structural errors before the agent moves to the next step. These are deterministic, fast, and cheap. A type checker that flags an incompatible return type during generation costs far less than a test failure three steps later.
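As a minimal sketch of such a check (not any particular tool's API; the boundary rule, the import regex, and the file paths are all assumptions for illustration), a harness could scan freshly generated source for forbidden imports before letting the agent proceed:

```typescript
// Minimal module-boundary check: flag imports that cross a forbidden
// boundary before the agent moves to the next step. Rules are illustrative.
type BoundaryRule = { module: string; forbidden: string };

const rules: BoundaryRule[] = [
  { module: "payments", forbidden: "users" },
];

// Returns the import specifiers in `source` that violate a boundary rule
// for a file that lives inside `module`.
function violations(module: string, source: string): string[] {
  const imports = [...source.matchAll(/from\s+["']([^"']+)["']/g)].map(
    (m) => m[1],
  );
  return imports.filter((spec) =>
    rules.some((r) => r.module === module && spec.includes(r.forbidden)),
  );
}

// Example: a generated payments file that reaches into users directly.
const generated = `import { getUser } from "../users/db";`;
console.log(violations("payments", generated)); // ["../users/db"]
```

A real harness would use the compiler's module graph rather than a regex, but the shape is the same: a deterministic check that runs in milliseconds and fires before the error can compound.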
Choosing what to include matters as much as including it. Not every convention belongs in every session. Match feedforward to scope: project-wide conventions load automatically via instruction files; task-specific constraints belong in the prompt or in documents the agent reads on demand. Boeckeler draws the distinction between persistent guides (always present) and situational guides (loaded for specific tasks).
When an agent makes the same mistake twice, treat it as a feedforward gap. Add an instruction file rule, a linter check, or a prompt constraint so the mistake becomes less likely on the next attempt. Over time, your feedforward controls encode your project’s accumulated judgment.
How It Plays Out
A team maintains a TypeScript monorepo with strict module boundaries: the payments module must never import from users directly. They encode this rule in two places: the project’s instruction file (so the agent knows the constraint) and an ESLint rule (so the build enforces it). When an agent works on a payment feature, it reads the instruction file and respects the boundary. If it slips, the linter flags the cross-module import before tests run. The agent reads the lint error, restructures its imports, and the next check passes. Two feedforward controls, one document and one computational, prevented a design violation that integration tests might never have caught.
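The lint half of that setup can be expressed with ESLint's built-in `no-restricted-imports` rule. This is a sketch in flat-config style; the package layout and glob patterns are assumptions about this hypothetical monorepo:

```javascript
// eslint.config.js (sketch): enforce the payments -> users boundary.
export default [
  {
    // Only files inside the payments module are constrained here.
    files: ["packages/payments/**/*.ts"],
    rules: {
      "no-restricted-imports": ["error", {
        patterns: [{
          group: ["**/users/**"],
          message: "payments must not import from users directly",
        }],
      }],
    },
  },
];
```

Because the same rule also lives in the instruction file, the agent usually never triggers it; the lint rule is the backstop for the times it does.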
A solo developer writes a specification for a new API endpoint before asking the agent to implement it. The spec describes the request and response shapes, the validation rules, and the error codes. The agent reads the spec, generates the implementation, and the output matches the spec on the first pass. Without the spec, the agent would have made reasonable guesses about error handling that didn’t match the developer’s intent, requiring several rounds of correction.
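Such a spec can be as lightweight as shared types plus explicit error codes. The endpoint below is hypothetical; the names, shapes, and validation rule are illustrative, not the developer's actual spec:

```typescript
// Spec for POST /subscriptions (illustrative). The agent reads these
// shapes and rules before implementing, instead of guessing them.
interface CreateSubscriptionRequest {
  customerId: string; // must be non-empty after trimming
  planId: "basic" | "pro";
}

type CreateSubscriptionResponse =
  | { status: 201; subscriptionId: string }
  | { status: 400; errorCode: "INVALID_CUSTOMER" };

// The spec's validation rule, stated as code so it is checkable.
// Returns an error response, or null when the request is valid.
function validate(
  req: CreateSubscriptionRequest,
): CreateSubscriptionResponse | null {
  if (req.customerId.trim() === "") {
    return { status: 400, errorCode: "INVALID_CUSTOMER" };
  }
  return null;
}
```

Writing the error cases down is the point: error handling is exactly where an unguided agent makes plausible but wrong guesses.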
A task prompt that combines both kinds of control might read: “Before writing any code, read CLAUDE.md and the spec in docs/api-spec.md. Follow the module boundary rules described there. The payments module must not import from users directly.”
Consequences
Good feedforward controls reduce iteration cycles and produce output that needs less correction. They encode your project’s standards in a form that works for both human and AI collaborators. Over time, a well-maintained set of feedforward controls becomes a living record of your team’s architectural decisions and coding judgment.
The cost is maintenance. Instruction files, specs, and linter rules must be written, kept current, and scoped appropriately. Stale feedforward is worse than none: an instruction file describing last quarter’s architecture sends the agent confidently in the wrong direction. Overly verbose feedforward can also backfire, consuming context window space that the agent needs for the actual task.
Related Patterns
- Depends on: Harness (Agentic) — the harness loads and orchestrates feedforward controls.
- Depends on: Context Engineering — choosing which feedforward to include is a context engineering decision.
- Uses: Instruction File — the primary vehicle for persistent feedforward.
- Uses: Specification — a spec is task-level feedforward that defines expected behavior.
- Contrasts with: Feedback Sensor — feedback sensors detect errors after the act; feedforward works before it.
- Enables: Verification Loop — verification loops consume both feedforward and feedback signals.
- Enables: Steering Loop — the steering loop connects feedforward with feedback sensors into a closed control system.
- Enables: Harnessability — codebases with strong feedforward affordances are more harnessable.
- Related: Domain Model — a domain model included in context steers the agent toward correct names and structures.
Sources
- The concept of feedforward control originates in control theory and cybernetics. I. A. Richards coined the term “feedforward” at the 8th Macy Conference on Cybernetics in 1951, though the underlying principle appears in Harold S. Black’s 1923 patent work on amplifier design. The modern discipline of feedforward control was developed through the 1970s at MIT, Georgia Tech, Stanford, and Carnegie Mellon.
- Marshall Goldsmith and Jon Katzenbach adapted the feedforward concept to management coaching in the early 1990s, reframing it as forward-looking behavioral suggestion rather than backward-looking critique. Goldsmith’s popularization of “feedforward vs. feedback” as a coaching distinction is an intellectual ancestor of the guides-vs-sensors framing used in agentic coding.
- Birgitta Boeckeler introduced the guides (feedforward) and sensors (feedback) framework for agentic harness engineering in “Harness engineering for coding agent users”, published on Martin Fowler’s blog. This article’s structure and terminology draw directly from that framework.
- OpenAI’s “Harness engineering: leveraging Codex in an agent-first world” extended the guides-and-sensors model to large-scale agent-driven development, providing additional practical patterns for feedforward and feedback controls.