What Is Agentic Coding?
In early 2025, Kenta Naruse, a machine learning engineer at Rakuten, gave a coding agent a task: implement a specific activation vector extraction method inside vLLM, an open-source inference library spanning 12.5 million lines of code across multiple languages. He typed the instructions, hit enter, and watched. The agent read the codebase, identified the files it needed to change, wrote the implementation, ran the test suite, fixed what failed, and kept going. Seven hours later, it produced a working implementation with 99.9% numerical accuracy against the reference method. Naruse didn’t write a single line of code during those seven hours; he provided only occasional guidance. The agent did the building.
Two years earlier, that task would have required weeks of manual work: reading unfamiliar code across multiple modules, tracing data flows, writing the implementation, and debugging until the numbers matched. Two years before that, no AI tool could have attempted it at all.
What Naruse did that day has a name: agentic coding.
What Makes It “Agentic”
The word comes from agency, the capacity to act toward a goal on your own. An agent doesn’t wait for you to type each line. It accepts a goal, breaks it into steps, and works through them: reading files, running commands, writing code, executing tests, fixing failures, repeating until the task is done or it gets stuck. It uses tools to interact with the real development environment, not just generate text in a chat window.
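That loop (plan a step, act, observe the result, repeat) can be sketched in a few lines. This is a toy illustration, not any real agent’s implementation: `plan` and `run_step` are hypothetical stand-ins for the model call and tool invocations a real agent would make.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    output: str
    done: bool

def plan(goal: str, history: list) -> str:
    # Stand-in for a model call that decides the next action
    # based on the goal and everything observed so far.
    return f"step {len(history) + 1} toward: {goal}"

def run_step(step: str) -> StepResult:
    # Stand-in for a tool call (edit a file, run a command, run tests).
    # For illustration, attempts 1 and 2 "fail" and attempt 3 "succeeds".
    attempt = int(step.split()[1])
    return StepResult(output=f"ran {step}", done=(attempt >= 3))

def run_agent(goal: str, max_iterations: int = 10) -> list:
    """The core agentic loop: plan, act, observe, repeat until done or stuck."""
    history = []
    for _ in range(max_iterations):
        step = plan(goal, history)       # decide the next action
        result = run_step(step)          # act in the environment
        history.append((step, result))   # fold the outcome into context
        if result.done:
            break                        # goal reached
    return history                       # if never done: stuck, hand back to the human

trace = run_agent("make the test suite pass")
```

The essential property is the feedback arrow: each step’s result feeds the next round of planning, which is what lets an agent recover from its own failures instead of waiting for you to intervene.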
Three capabilities converged to make this possible.
Language models got good enough at reasoning about code structure, inferring intent from short descriptions, and recovering from their own errors.
Tool use became standard. Models could now run terminal commands, read files, search a codebase, and fold the results into their next action. This is what lets an agent operate in a real development environment rather than producing text you have to copy and paste yourself.
Context windows grew large enough to hold meaningful chunks of a codebase. An agent that can see only 10 lines can’t reason about a 2,000-line module. One that can hold hundreds of thousands of tokens can.
The result: the model moved from assistant to participant. Earlier AI coding tools responded to what you were typing. An agent responds to what you’re trying to accomplish.
The Spectrum
AI coding assistance didn’t jump straight to agents. It arrived in layers, and each layer changed what the tool could do and what it asked of you.
Autocomplete (2021) predicts the next token based on what’s in your editor. It has no concept of your project’s goals and no way to recover from its own mistakes.
Chat (2023) lets you ask questions and get answers in a conversation. More flexible, but still reactive: it waits for you to drive every turn.
Agents (2025) accept a goal and pursue it across multiple steps. They read your codebase, plan changes, make edits, run tests, and iterate. You describe what you want. The agent figures out how to get there. When it hits a problem, it can back up and try a different approach without waiting for you to intervene.
These layers coexist. Developers who use agents still reach for autocomplete when they’re writing code by hand. What changes is the default mode of work: for tasks with a clear objective, directing an agent replaces typing the solution yourself. The shift isn’t about which tool you open. It’s about whether you’re producing code or producing instructions that produce code.
What You’re Actually Doing
If the agent writes the code, what do you do? Your job doesn’t disappear. It shifts. Three activities take the place of manual coding, and each one is a skill worth developing.
Writing prompts. A prompt is the instruction that tells the agent what to build. “Add input validation to the registration form” is a start. “Validate email format, enforce minimum password length of 12 characters, reject empty fields, and write unit tests for every case” gets better results. Precision in the prompt translates directly to quality in the output. Learning what to specify (and what to leave to the agent’s judgment) is a skill that develops with practice.
Reviewing output. Agents misread requirements, pick wrong approaches, and write code that passes tests but misses the point. You read the diff the way you’d review a colleague’s pull request: does the logic match the intent? Are edge cases handled? Was anything introduced that shouldn’t be there? Keeping a human in the loop isn’t a formality; it’s how mistakes get stopped before they ship.
Verifying the work. Review catches what looks wrong. Verification catches what is wrong. You run the tests, check the behavior against the spec, and confirm that the agent’s solution holds up beyond the happy path. The verification loop is the mechanism that maintains quality when you aren’t writing every line yourself.
Start with tasks that have a clear definition of done: a test suite that should pass, a function with a known interface, a format that can be validated. Agents perform better when they can check their own work.
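As an illustration, here is what a clear definition of done can look like for the registration-form example earlier in this chapter: a function with a known interface plus tests the agent must make pass. The body shown is one possible outcome; the names and the crude email check are illustrative, not a production validator.

```python
import re

def validate_registration(email: str, password: str) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not email or not password:
        errors.append("empty field")
    # Crude email shape check for the sketch; deliberately not RFC-complete.
    elif not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email format")
    if password and len(password) < 12:
        errors.append("password shorter than 12 characters")
    return errors

# The tests below are the definition of done: the agent can run them,
# see exactly which case fails, and iterate until all of them pass.
assert validate_registration("a@example.com", "longenough123") == []
assert "invalid email format" in validate_registration("not-an-email", "longenough123")
assert "password shorter than 12 characters" in validate_registration("a@example.com", "short")
assert validate_registration("", "") == ["empty field"]
```

Because the tests encode the spec, the agent’s self-checks and your verification are the same check, which is exactly the property that makes a task agent-friendly.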
Where This Book Picks Up
The Welcome page described the shift: code is free, the bottleneck moved from typing to thinking, and the knowledge behind good software matters more than ever. This chapter showed you what the shift looks like in practice. The rest of the book gives you the vocabulary to work within it.
That vocabulary is organized as a pattern language. Each entry names one concept that keeps coming up when people direct agents to build software: Agent, Prompt, Context Window, Tool, Verification Loop, Steering Loop, and dozens more. Each entry describes the problem, the forces at play, and a concrete solution. The entries link to each other, forming a web you can navigate in any direction.
Start with whatever concept you need most, or begin at Model for a foundation. If you want a guided path, How to Read This Book offers five learning tracks tailored to different backgrounds.
If the idea of a “pattern language” is new to you, What Are Design Patterns? explains the tradition this book builds on and why naming these concepts matters.