Threat Model
“If you don’t know what you’re defending against, you can’t know whether your defenses work.” — Adam Shostack
Context
This is a tactical pattern, and it belongs at the very start of security thinking. Before you can decide what to protect or how, you need a structured picture of your risks. A threat model is that picture.
In agentic coding, threat modeling applies to both the software you’re building and the development process itself. When an AI agent has access to your codebase, your shell, and your deployment credentials, the threat model for your development environment has changed. That’s worth thinking through explicitly.
Problem
Security work without a threat model is guesswork. Teams either protect everything equally (spending enormous effort on low-risk areas) or they protect whatever feels scary, leaving real risks unaddressed. How do you decide where to focus your limited security effort?
Forces
- You can’t defend against everything equally. Resources and attention are finite.
- Threats evolve as the system changes, so a model that never gets updated becomes misleading.
- Different stakeholders see different threats as important, which makes prioritization political as well as technical.
- Overly formal threat modeling feels heavy and gets skipped. Overly casual thinking misses real risks.
Solution
Build a structured description that answers four questions: What are you building? What can go wrong? What are you going to do about it? Did you do a good enough job? These four questions are Adam Shostack’s “Four Question Frame,” and they underlie most threat modeling approaches; Microsoft’s STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) is a complementary taxonomy for the “what can go wrong” step.
Start by identifying the assets worth protecting: user data, credentials, system availability, business logic. Then identify the actors who might threaten those assets: external attackers, malicious insiders, compromised dependencies, and (in agentic workflows) the AI agent itself when it processes untrusted input. Map the attack surface: every place where those actors can interact with your system. For each path, ask what could go wrong and how bad it would be.
You don’t need a hundred-page document. A threat model can be a whiteboard sketch, a markdown file, or a conversation. What matters is that the thinking happens out loud rather than staying as vague unease.
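If you do keep the model in a file, even a few lines of structure make it sortable and reviewable. The sketch below is one illustrative way to record threats with a simple likelihood × impact score; the `Threat` class, its fields, and the example scores are assumptions for the sake of the sketch, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    boundary: str      # where the threat crosses a trust boundary
    description: str   # what could go wrong
    likelihood: int    # 1 (rare) to 3 (likely) -- illustrative scale
    impact: int        # 1 (minor) to 3 (severe) -- illustrative scale

    @property
    def risk(self) -> int:
        # Simple likelihood x impact score for rough prioritization.
        return self.likelihood * self.impact

threats = [
    Threat("browser -> API", "oversized file upload exhausts disk", 3, 2),
    Threat("payment provider -> API", "forged payment callback", 1, 3),
    Threat("source repo", "hardcoded database credentials leak", 3, 3),
]

# Rank so the highest-risk items come first in review.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"[risk {t.risk}] {t.boundary}: {t.description}")
```

A scored list like this is easy to revisit: when the attack surface changes, add or rescore entries rather than rewriting a document.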
How It Plays Out
A team building a web application sits down for an hour and sketches their system on a whiteboard: a browser client, an API server, a database, and a third-party payment provider. They draw trust boundaries. The browser is untrusted, the payment provider is semi-trusted, the database is internal. They walk each boundary and ask: what crosses here, and what could an attacker do? They discover that their API accepts file uploads with no size limit, that their payment callback URL has no signature verification, and that their database connection string is hardcoded in source. Three concrete findings in one hour.
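The callback finding is worth a concrete sketch. One common fix, assuming the payment provider signs callbacks with an HMAC over the payload using a shared secret, looks like this; the secret, header format, and function names are hypothetical, since every provider documents its own signing scheme.

```python
import hmac
import hashlib

# Hypothetical shared secret, obtained from the provider's dashboard.
SIGNING_SECRET = b"webhook-signing-secret"

def sign(payload: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature the provider would send."""
    return hmac.new(SIGNING_SECRET, payload, hashlib.sha256).hexdigest()

def verify_callback(payload: bytes, signature_header: str) -> bool:
    """Accept the callback only if the signature matches the payload."""
    expected = sign(payload)
    # compare_digest avoids leaking match position through timing differences.
    return hmac.compare_digest(expected, signature_header)

payload = b'{"order_id": 42, "status": "paid"}'
genuine = verify_callback(payload, sign(payload))   # signed by the provider
forged = verify_callback(payload, "0" * 64)          # attacker-supplied
```

Without the check, anyone who learns the callback URL can mark arbitrary orders as paid; with it, only callers holding the secret can.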
When directing an AI agent to build a new feature, ask it to enumerate the trust boundaries and potential threats before writing code. Agents are good at systematic enumeration, and this makes security thinking part of the development conversation rather than something you bolt on later.
A developer using an agentic coding tool realizes the agent can read environment variables, execute arbitrary shell commands, and push to git. The threat model for their dev setup now includes a new question: what if the agent processes a malicious file and gets tricked into running harmful commands? This leads them to configure a sandbox and restrict which tools the agent can access.
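Restricting the agent’s tools can be as simple as an allowlist check in front of shell execution. The sketch below shows the general pattern; the allowed set and the `is_permitted` helper are hypothetical, and a real sandbox would enforce this at a layer the agent cannot bypass.

```python
import shlex

# Hypothetical allowlist: binaries the agent may invoke directly.
ALLOWED_COMMANDS = {"git", "ls", "cat", "pytest"}

def is_permitted(command_line: str) -> bool:
    """Return True only if the command's binary is on the allowlist."""
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False  # malformed quoting is rejected outright
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS
```

This denies by default: anything not explicitly listed, including `curl` or `rm`, is refused, which matches the threat of a tricked agent running harmful commands.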
“Before building this feature, draw the trust boundaries for the system: which inputs are untrusted, which services are external, and where data crosses from one trust level to another. List the threats at each boundary.”
Consequences
A threat model gives you a rational basis for security decisions. Instead of “we should probably encrypt that,” you can say “our threat model identifies data exfiltration by a compromised dependency as a high risk, so we encrypt at rest and restrict network access.” It makes security spending justifiable and reviewable.
The cost is maintenance. A model created at launch and never revisited will miss new features, new integrations, and new attack techniques. The model also can’t capture threats you’ve never imagined. It reduces surprise but doesn’t eliminate it. Treat it as a living document, revisited whenever the system’s attack surface changes significantly.
Related Patterns
- Enables: Attack Surface. Threat modeling identifies where the attack surface lies.
- Enables: Trust Boundary. The model explicitly maps where trust changes.
- Uses: Blast Radius. Understanding blast radius helps prioritize threats.
- Refined by: Vulnerability. Specific vulnerabilities are instances of modeled threats.