Prompt
“The quality of the answer is determined by the quality of the question.” — proverb
Understand This First
- Model — the prompt is addressed to a model.
Context
At the agentic level, a prompt is the instruction set given to a model to steer its behavior. Every interaction with an AI agent begins with a prompt, whether it’s a single sentence typed into a chat interface or a carefully structured system message assembled by an agentic harness.
Prompts are the primary interface between human intent and model behavior. They occupy a role analogous to requirements in traditional software development: they describe what you want, and the quality of the result depends heavily on how clearly and completely you describe it.
Problem
How do you instruct a model to produce the output you actually want, rather than the output it defaults to?
Models are eager to please and will produce something for almost any input. The challenge isn’t getting output; it’s getting the right output. A vague prompt produces generic results. An overly specific prompt may constrain the model in ways that prevent it from contributing its best work. Finding the right level of guidance is a skill that develops with practice.
Forces
- Vagueness gives the model too much freedom, leading to generic or off-target results.
- Over-specification removes the model’s ability to contribute insight or suggest better approaches.
- Implicit assumptions in the prompt lead to mismatches between what you meant and what the model infers.
- Context limits mean you can’t include everything relevant. You must choose what to include and what to omit.
Solution
Write prompts that communicate intent, constraints, and context, in that order of importance.
Lead with intent. State what you want to accomplish, not just what you want the model to do. “Help me handle file upload errors gracefully so users always know what went wrong” gives the model more to work with than “add error handling to the upload function.”
State constraints explicitly. If you want Python 3.11, say so. If you want no external dependencies, say so. If the function must be pure (no side effects), say so. Models default to the most common patterns from their training data, which may not match your project’s conventions.
Provide context. Include relevant code, type definitions, project conventions, or examples of the style you want. The model works within its context window. Anything not in that window doesn’t exist for the model.
Specify the output format when it matters. “Return only the function, no explanation” or “explain your reasoning before writing code” produce very different interactions.
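Putting these four elements together — intent first, then constraints, context, and output format — a complete prompt might look like the following (the scenario and names are illustrative, not drawn from a real project):

```text
Intent: Help me make config loading resilient so the app starts even
when the config file is missing or malformed.

Constraints: Rust, no external crates beyond serde. The loader must
never panic; return a typed error instead.

Context: Our Config struct and the current loader are pasted below.
[paste Config definition and load_config() here]

Output format: Return the revised function only, followed by a one-line
summary of each change.
```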
Prompt quality improves dramatically when combined with context engineering, the deliberate management of what the model sees. A well-crafted prompt in a well-curated context is far more effective than a perfect prompt in a barren one.
When a model produces disappointing results, resist the urge to blame the model. Instead, look at your prompt: Was the intent clear? Were constraints stated? Was enough context provided? In most cases, the prompt is the lever with the highest return on adjustment.
How It Plays Out
A developer types: “Write a function to parse dates.” The model produces a JavaScript function that parses a specific date format using Date.parse(). The developer wanted a Rust function that handles ISO 8601, RFC 2822, and several custom formats. Every unstated assumption (language, format, error handling) was filled in by the model’s defaults.
The developer rewrites: “Write a Rust function that parses date strings. It should handle ISO 8601, RFC 2822, and the format ‘MMM DD, YYYY’. Return a chrono::NaiveDate on success or a descriptive error. No external crates beyond chrono.” The model produces exactly what was needed on the first try.
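As a rough sketch of the kind of function that second prompt describes — simplified here to stay self-contained, so it returns a plain (year, month, day) tuple instead of chrono::NaiveDate and handles only the ISO 8601 and “MMM DD, YYYY” formats (RFC 2822 parsing is omitted):

```rust
#[derive(Debug, PartialEq)]
struct ParseError(String);

/// Parse "YYYY-MM-DD" or "MMM DD, YYYY" into (year, month, day).
fn parse_date(s: &str) -> Result<(i32, u32, u32), ParseError> {
    let s = s.trim();

    // ISO 8601 calendar date: "2024-03-15"
    if let Some((y, rest)) = s.split_once('-') {
        if let Some((m, d)) = rest.split_once('-') {
            if let (Ok(y), Ok(m), Ok(d)) = (y.parse(), m.parse(), d.parse()) {
                return check(y, m, d);
            }
        }
    }

    // "MMM DD, YYYY": "Mar 15, 2024"
    let months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                  "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
    let parts: Vec<&str> = s.split_whitespace().collect();
    if parts.len() == 3 {
        if let Some(m) = months.iter().position(|&x| x == parts[0]) {
            let day = parts[1].trim_end_matches(',');
            if let (Ok(d), Ok(y)) = (day.parse(), parts[2].parse()) {
                return check(y, (m + 1) as u32, d);
            }
        }
    }

    Err(ParseError(format!("unrecognized date format: {:?}", s)))
}

/// Reject obviously impossible month/day values.
fn check(y: i32, m: u32, d: u32) -> Result<(i32, u32, u32), ParseError> {
    if (1..=12).contains(&m) && (1..=31).contains(&d) {
        Ok((y, m, d))
    } else {
        Err(ParseError(format!("date out of range: {}-{}-{}", y, m, d)))
    }
}

fn main() {
    assert_eq!(parse_date("2024-03-15"), Ok((2024, 3, 15)));
    assert_eq!(parse_date("Mar 15, 2024"), Ok((2024, 3, 15)));
    assert!(parse_date("not a date").is_err());
}
```

Notice that every detail the sketch commits to — return type, accepted formats, error behavior — was fixed by the prompt rather than left to the model's defaults.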
A team discovers that starting prompts with “You are an expert in…” followed by a domain description consistently produces more detailed and accurate responses than bare questions. They aren’t giving the model new knowledge. They’re activating the relevant portion of what it already knows by framing the conversation context.
Another well-formed prompt shows the same elements in one sentence each — language, inputs, return type, and dependency constraints: “Write a Rust function that validates email addresses according to RFC 5321. Accept the local part and domain as separate &str parameters. Return Result<(), ValidationError> with descriptive error variants. No external crates.”
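A minimal sketch of what that prompt might yield — it enforces only the basic RFC 5321 length limits and character rules, while a full validator would also handle quoted local parts and address literals:

```rust
#[derive(Debug, PartialEq)]
enum ValidationError {
    EmptyLocalPart,
    EmptyDomain,
    LocalPartTooLong, // RFC 5321 limits the local part to 64 octets
    DomainTooLong,    // and the domain to 255 octets
    InvalidCharacter(char),
}

fn validate_email(local: &str, domain: &str) -> Result<(), ValidationError> {
    if local.is_empty() {
        return Err(ValidationError::EmptyLocalPart);
    }
    if domain.is_empty() {
        return Err(ValidationError::EmptyDomain);
    }
    if local.len() > 64 {
        return Err(ValidationError::LocalPartTooLong);
    }
    if domain.len() > 255 {
        return Err(ValidationError::DomainTooLong);
    }
    // Dot-atom local part: alphanumerics plus the symbols RFC 5321 permits.
    for c in local.chars() {
        if !(c.is_ascii_alphanumeric() || "!#$%&'*+-/=?^_`{|}~.".contains(c)) {
            return Err(ValidationError::InvalidCharacter(c));
        }
    }
    // Domain labels: letters, digits, and hyphens, separated by dots.
    for c in domain.chars() {
        if !(c.is_ascii_alphanumeric() || c == '-' || c == '.') {
            return Err(ValidationError::InvalidCharacter(c));
        }
    }
    Ok(())
}

fn main() {
    assert_eq!(validate_email("alice", "example.com"), Ok(()));
    assert!(validate_email("al ice", "example.com").is_err());
    assert!(validate_email("", "example.com").is_err());
}
```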
Consequences
Good prompts save time by reducing the number of iterations needed to reach a useful result. They produce code that’s closer to your project’s style and conventions. They help the model avoid its default biases toward the most common patterns in its training data.
The cost is the effort of thinking before typing. Writing a good prompt requires clarifying your own intent, which, like writing good requirements, often reveals that your thinking was less precise than you assumed. This is a feature, not a bug: the discipline of prompting well improves the quality of your own reasoning.
Related Patterns
- Depends on: Model — the prompt is addressed to a model.
- Uses: Context Window — the prompt occupies part of the finite context window.
- Refined by: Context Engineering — context engineering is the systematic practice of optimizing what goes into the prompt.
- Refined by: Instruction File — instruction files are durable prompts that persist across sessions.
- Enables: Agent — agents are built on sequences of prompts and model responses.