Concept 05
Prompt Engineering Patterns
Seven reusable techniques for writing effective phase templates. Tool-agnostic, model-agnostic. The building blocks of every good prompt.
"Prompt engineering patterns are like design patterns in software: named solutions to recurring problems."
Role Assignment is the Strategy pattern. Structured Output is the Interface pattern. Learn them once, apply them to any agent stack.
Anatomy of a prompt template
Role assignment
Assign a specific professional persona at the start of every prompt. "You are a Software Architect." "You are a QA Engineer." This constrains the agent's behavior to that role's expectations. A QA engineer focuses on edge cases; a software architect focuses on structure.
Before
"Analyze this code and suggest improvements."
Vague, unfocused. The agent tries to do everything.
After
"You are a Senior Security Engineer. Analyze this code for authentication vulnerabilities, injection risks, and data exposure. Ignore style and performance."
Focused, bounded. The agent knows its lane.
Pattern: always place role assignment as the first section. Include what the role IS responsible for and what it is NOT.
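As a minimal sketch, a template can hard-code the role as its mandatory first section, with the IS/IS NOT boundaries spelled out before the task ever appears. The `ROLE_SECTION` constant and `build_review_prompt` helper here are hypothetical names, not a real API:

```python
# Hypothetical sketch: role assignment as the first section of a
# reusable prompt template. Names are illustrative.
ROLE_SECTION = (
    "You are a Senior Security Engineer.\n"
    "You ARE responsible for: authentication vulnerabilities, "
    "injection risks, data exposure.\n"
    "You are NOT responsible for: code style, performance."
)

def build_review_prompt(code: str) -> str:
    """Role first, task second -- the role bounds everything after it."""
    return f"{ROLE_SECTION}\n\nAnalyze this code:\n{code}"

prompt = build_review_prompt("def login(user, pw): ...")
```

Because the role always renders first, every prompt built from this template inherits the same behavioral constraints regardless of the task text.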
Structured output contracts
Define the exact output format in the prompt. Downstream consumers (other agents or humans) depend on predictable structure. If the plan artifact is free-form text one time and a numbered list the next, the Build template's ${plan_artifact} substitution becomes unreliable.
// JSON contract: output must match this schema
{
  "verdict": "pass | fail",
  "issues": [
    {
      "severity": "blocker | tech_debt | skippable",
      "file": "string",
      "line": number,
      "description": "string",
      "suggested_fix": "string"
    }
  ]
}
This is how artifact chaining stays reliable. When the output contract is explicit, every workflow run produces the same artifact shape.
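One way to hold agents to the contract is to validate their raw output against the schema before any downstream phase consumes it. A sketch, assuming the field names from the schema above (`validate_review` is a hypothetical helper, not part of any workflow engine):

```python
import json

# Hypothetical sketch: validating agent output against the review
# contract before passing the artifact downstream.
REQUIRED_ISSUE_FIELDS = {"severity", "file", "line", "description", "suggested_fix"}

def validate_review(raw: str) -> dict:
    """Reject any output that does not match the contract shape."""
    data = json.loads(raw)
    if data.get("verdict") not in ("pass", "fail"):
        raise ValueError("verdict must be 'pass' or 'fail'")
    for issue in data.get("issues", []):
        missing = REQUIRED_ISSUE_FIELDS - issue.keys()
        if missing:
            raise ValueError(f"issue missing fields: {missing}")
    return data

report = validate_review(
    '{"verdict": "fail", "issues": [{"severity": "blocker", '
    '"file": "src/auth/tokens.py", "line": 112, '
    '"description": "token not invalidated", "suggested_fix": "revoke on refresh"}]}'
)
```

If validation fails, the workflow can re-prompt the agent rather than silently passing a malformed artifact to the next phase.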
Anti-scope-creep instructions
Agents are eager to help. Given the Plan prompt, an agent might start implementing. Given the Review prompt, it might start fixing bugs. Explicit boundaries prevent this. Every prompt template should have a "What we are NOT doing" section.
## What We Are NOT Doing
- Do NOT modify files outside the plan scope
- Do NOT refactor code that is not part of this task
- Do NOT add features not requested in the issue
- Do NOT fix pre-existing bugs you encounter
- If you find issues outside scope, NOTE them in a "Future Work" section -- do not act on them
This pattern works hand-in-hand with tool permissions. Constraints in the prompt tell the agent what it should not do. Permissions enforce what it cannot do.
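To illustrate the enforcement side, a write-permission allowlist can make the prompt's "do NOT modify" rule unbypassable. This is a hypothetical sketch; `check_write_allowed` and the path prefixes are illustrative, and real agent stacks expose permissions through their own configuration:

```python
# Hypothetical sketch: enforcing scope outside the prompt. The prompt
# says "do NOT modify files outside the plan scope"; this check makes
# the rule a hard guarantee rather than a request.
ALLOWED_WRITE_PREFIXES = ("src/auth/", "tests/auth/")

def check_write_allowed(path: str) -> bool:
    """Return True only for paths inside the plan's scope."""
    return path.startswith(ALLOWED_WRITE_PREFIXES)

assert check_write_allowed("src/auth/tokens.py")
assert not check_write_allowed("src/billing/invoice.py")
```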
Conditional documentation loading
Context windows are finite. Loading irrelevant files wastes tokens and introduces noise. Tell agents exactly which files to read, and which to ignore.
## Required Reading
Read ONLY the following files before proceeding:
- src/auth/tokens.py (module being modified)
- src/auth/session.py (related dependency)
- tests/auth/test_tokens.py (existing tests)
## Do NOT Read
- Any files outside src/auth/ and tests/auth/
- Configuration files unless referenced in the plan
Advanced: use the plan artifact to dynamically determine which files to load. The Plan phase identifies target files; the Build template uses those as the conditional loading list.
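A sketch of that dynamic step: the Build template's conditional-loading section generated from the files the Plan phase identified. The `build_reading_section` helper and its output format are assumptions mirroring the snippet above:

```python
# Hypothetical sketch: generating the Build template's "Required
# Reading" section from the Plan phase's target-file list.
def build_reading_section(target_files: list[str], scope: str) -> str:
    reads = "\n".join(f"- {path}" for path in sorted(target_files))
    return (
        "## Required Reading\n"
        "Read ONLY the following files before proceeding:\n"
        f"{reads}\n"
        "## Do NOT Read\n"
        f"- Any files outside {scope}\n"
    )

section = build_reading_section(
    ["src/auth/tokens.py", "src/auth/session.py"], "src/auth/"
)
```

The reading list now changes per task while the template structure stays fixed, which is exactly the reuse the variable-substitution pattern below depends on.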
Template variable substitution
Use ${variable} syntax to inject dynamic context into prompts. The workflow engine substitutes these before sending to the agent. Templates are reusable across tasks: the variables change, the structure stays the same.
Use descriptive names
${plan_artifact} not ${input}
Document what each variable contains
Include the expected format (markdown, JSON, string) in template comments
Use fallback values for optional variables
${branch_name:-main} defaults to "main" if unset
Never nest variables
${${phase}_artifact} is fragile. Use explicit names like ${plan_artifact}
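The substitution step itself is simple enough to sketch. This hypothetical implementation supports `${name}` and the shell-style `${name:-default}` fallback shown above; a real workflow engine's syntax may differ:

```python
import re

# Hypothetical sketch of the substitution step a workflow engine might
# run before sending a template to the agent. Supports ${name} and
# ${name:-default} (shell-style fallback).
def substitute(template: str, variables: dict[str, str]) -> str:
    def repl(match: re.Match) -> str:
        name, default = match.group("name"), match.group("default")
        value = variables.get(name, default)
        if value is None:
            raise KeyError(f"missing variable with no fallback: {name}")
        return value

    pattern = r"\$\{(?P<name>\w+)(?::-(?P<default>[^}]*))?\}"
    return re.sub(pattern, repl, template)

out = substitute(
    "Branch: ${branch_name:-main}\nPlan:\n${plan_artifact}",
    {"plan_artifact": "1. Update token refresh"},
)
# out == "Branch: main\nPlan:\n1. Update token refresh"
```

Raising on a missing variable with no fallback fails the workflow loudly at substitution time, instead of sending the agent a prompt with a literal `${plan_artifact}` in it.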
file:line precision references
Always reference specific file paths and line numbers in prompts and outputs. Precision dramatically improves execution accuracy. Vague references lead to the agent searching (wasting tokens), guessing (wrong file), or modifying the wrong code.
Vague
"Update the auth module to handle token refresh."
Precise
"Replace the token refresh logic at src/auth/tokens.py:112-130. Update the expiry check at src/auth/session.py:55."
Require the agent to produce file:line references in its own output so downstream phases have the same precision.
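One way to enforce that requirement mechanically is to extract and check the references in the agent's output before the next phase runs. A sketch, assuming a `path:start-end` span format; the regex and helper name are illustrative, not a standard:

```python
import re

# Hypothetical sketch: extracting file:line references from agent
# output so a downstream phase can verify precision (or reject vague
# output that contains no references at all).
REF_PATTERN = re.compile(r"(?P<file>[\w./-]+):(?P<start>\d+)(?:-(?P<end>\d+))?")

def extract_refs(text: str) -> list[tuple[str, int, int]]:
    """Return (file, start_line, end_line) for every reference found."""
    refs = []
    for m in REF_PATTERN.finditer(text):
        start = int(m.group("start"))
        end = int(m.group("end") or start)
        refs.append((m.group("file"), start, end))
    return refs

refs = extract_refs("Replace the refresh logic at src/auth/tokens.py:112-130.")
# refs == [("src/auth/tokens.py", 112, 130)]
```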
Issue-type routing
A feature needs user stories and acceptance criteria. A bug needs root cause analysis. A patch needs minimal-scope constraints. One-size-fits-all prompts produce mediocre results across the board. Use different templates for different issue types.
## Feature Plan Template
While [context], when [trigger],
the system shall [action] so that [outcome].
User stories:
- As a [role], I want [capability] so that [benefit]
Acceptance criteria:
- Given [state], when [action], then [result]
## Bug Fix Template
### Symptoms
${issue_description}
### Root Cause Analysis
1. Reproduce the bug (identify the trigger)
2. Trace the execution path
3. Identify the root cause (not just the symptom)
4. Propose the minimal fix
### Constraints
- Fix ONLY the root cause
- Do NOT refactor surrounding code
- Regression test: fails before fix, passes after
## Patch Template
### Scope
This is a minimal patch. Maximum 50 lines changed.
### Rules
- Change the fewest files possible
- No new dependencies
- No refactoring
- Must pass all existing tests
## Task Template
### Objective
${issue_description}
### Implementation
Follow standard implementation patterns.
Reference existing code in the same module for style.
### Deliverables
- Code changes with tests
- Updated documentation if public API changes
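The routing itself reduces to a lookup keyed on issue type. A minimal sketch, assuming the four templates above are stored as strings; the dict keys and `select_template` helper are illustrative:

```python
# Hypothetical sketch: routing an issue to the matching plan template.
# Template bodies are abbreviated stand-ins for the four templates above.
TEMPLATES = {
    "feature": "## Feature Plan Template\n(user stories, acceptance criteria)",
    "bug": "## Bug Fix Template\n(root cause analysis, minimal fix)",
    "patch": "## Patch Template\n(maximum 50 lines changed)",
    "task": "## Task Template\n(objective, deliverables)",
}

def select_template(issue_type: str) -> str:
    """Fail loudly on an unknown type rather than fall back to a generic prompt."""
    try:
        return TEMPLATES[issue_type]
    except KeyError:
        raise ValueError(f"unknown issue type: {issue_type}") from None

assert select_template("bug").startswith("## Bug Fix Template")
```

Failing on an unknown type is deliberate: silently falling back to a generic template reintroduces the one-size-fits-all problem this pattern exists to avoid.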