Phase 00 (optional)
Requirements Analyst

Intent

Clarify first, plan second. Interactive when needed, skippable when not.


Why this phase exists

The most common challenge in agentic planning is input that varies in detail and structure. The Plan phase assumes someone already defined the requirement clearly: issue type, scope, acceptance criteria. When that assumption is wrong, Plan produces specs that miss the point, Build implements the wrong thing, and the entire workflow loops back or escalates.

Intent exists to normalize input quality. Whether the input is a detailed JIRA ticket or "fix the login thing," Intent produces the same structured specification for Plan. It is the only interactive phase in the workflow; it can ask the user clarifying questions grounded in codebase evidence.

It goes first because it sets the quality bar for everything downstream. Every phase after Plan inherits the quality of what goes into Plan. Better input to Plan means fewer retries, fewer escalations, and fewer wasted workflow runs.

Intent is optional. Skip it when the input is already well-defined. Use it when it is not.

When to use Intent

  • Requirements come from different sources with varying quality (tickets, Slack messages, verbal requests)
  • The request is vague, ambiguous, or missing acceptance criteria
  • The issue was generated automatically (Monitor phase, production alerts)
  • Cross-team requests where domain context is missing
  • The team operates at L3-L4 and needs fully autonomous issue scoping

When to skip Intent

  • The ticket has clear acceptance criteria and specific scope
  • Requirements come from a mature product org with detailed specs
  • It is a hotfix or docs-only change
  • The team is at L1 and the human is performing the Intent role manually

Methodology

Step 1: Receive raw input

Accept whatever is provided: a ticket, a message, a description, a monitor-generated alert. Make no assumptions about quality or completeness.

Step 2: Research codebase context

Spawn subagents to find files related to the request, understand the affected domain, and discover existing patterns and constraints. This is the same parallel research approach Plan uses, but focused on understanding the requirement rather than planning the solution.
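
This fan-out can be sketched with `asyncio`. The `locator` and `analyzer` stubs below are hypothetical stand-ins for the Locator and Analyzer subagents named later under Agent pattern; in a real system each would be an LLM-backed agent with read-only tool access.

```python
import asyncio

# Hypothetical subagent stubs -- real versions would invoke
# LLM-backed agents with read-only tool permissions.
async def locator(request: str) -> dict:
    # Finds files related to the request area.
    return {"files": ["auth/login.py", "auth/session.py"]}

async def analyzer(request: str) -> dict:
    # Summarizes existing patterns and constraints in that area.
    return {"patterns": ["cursor-based pagination"],
            "constraints": ["no schema changes"]}

async def research(request: str) -> dict:
    # Run both subagents in parallel and merge their findings into
    # one context object for gap analysis and clarification.
    located, analyzed = await asyncio.gather(locator(request),
                                             analyzer(request))
    return {**located, **analyzed}

context = asyncio.run(research("fix the login thing"))
```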

Step 3: Identify gaps

Determine what is missing, ambiguous, or assumed. Are there multiple valid interpretations? Is the scope open-ended? Would Plan have to guess at key decisions?

Step 4: Ask clarifying questions (interactive)

Present findings and ask the user targeted questions. Only ask what codebase research cannot answer. Ground every question in evidence: "The existing /orders endpoint uses cursor-based pagination. Should /users match that?" Wait for answers. Verify answers against the codebase; do not accept claims at face value.
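
The "ground every question in evidence" rule can be enforced structurally. A minimal sketch, with a hypothetical `Question` record that simply refuses to exist without supporting evidence:

```python
from dataclasses import dataclass

@dataclass
class Question:
    evidence: str   # codebase fact the question is grounded in
    ask: str        # the targeted question itself

def grounded_question(evidence: str, ask: str) -> Question:
    # Reject questions with no supporting evidence: every question
    # must cite something that research actually found.
    if not evidence.strip():
        raise ValueError("clarifying questions must cite codebase evidence")
    return Question(evidence=evidence, ask=ask)

q = grounded_question(
    evidence="The existing /orders endpoint uses cursor-based pagination.",
    ask="Should /users match that?",
)
```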

Step 5: Classify issue type

Based on findings and clarification, determine: feature, bug, task, or patch.

Step 6: Produce intent specification

Generate the structured output that Plan will consume via $intent_artifact.

If the input is already well-defined (has acceptance criteria, clear scope, identified issue type), skip steps 3-4 and produce the specification directly.
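
The six steps plus the shortcut reduce to a short control-flow skeleton. Every helper below is a placeholder for an agent call; only the branching is the point, and the heuristics are deliberately crude.

```python
# Placeholder implementations so the control flow runs end to end.
def gather_context(req): return {"files": []}                 # step 2
def well_defined(req): return "acceptance criteria" in req.lower()
def identify_gaps(req, ctx): return ["scope unclear"]         # step 3
def ask_user(gaps): return {g: "needs answer" for g in gaps}  # step 4
def classify(req, ctx, answers):                              # step 5
    return "bug" if "fix" in req.lower() else "task"
def build_spec(req, ctx, answers, issue_type):                # step 6
    return {"problem": req, "issue_type": issue_type, "assumptions": answers}

def run_intent(raw_request: str) -> dict:
    context = gather_context(raw_request)
    if well_defined(raw_request):
        answers = {}                       # skip steps 3-4
    else:
        gaps = identify_gaps(raw_request, context)
        answers = ask_user(gaps)           # the one interactive pause
    issue_type = classify(raw_request, context, answers)
    return build_spec(raw_request, context, answers, issue_type)

spec = run_intent("fix the login thing")
```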

Inputs and outputs

Inputs

  • Raw request (any format, any quality level)
  • Repository context (agentic layer config, project docs)

Source: External (user, ticket system, Slack, or Monitor phase output)

Outputs

  • Intent specification (Markdown)
  • Problem statement, issue type, user stories or reproduction steps
  • Acceptance criteria, scope boundaries, codebase context
  • Assumptions stated and verified

Consumed by: Plan phase as $intent_artifact

Intent output template
# Intent: ${title}

## Problem / Goal
[Clear statement of what needs to happen and why]

## Issue Type
${issue_type}  # feature | bug | task | patch

## User Stories / Reproduction Steps

### If feature:
- When [stimulus], the system shall [response] (EARS format)

### If bug:
- Steps to reproduce
- Expected behavior
- Actual behavior

## Acceptance Criteria
- [ ] Specific, testable criterion
- [ ] Another criterion

## Scope
### In Scope
- What is included in this work

### Out of Scope
- What is explicitly excluded

## Codebase Context
- Relevant files and areas discovered during research
- Current patterns that apply
- Constraints discovered

## Assumptions
- Assumptions made and verified during clarification
- Assumptions that need Plan-phase validation
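
The template maps naturally onto a structured record. A sketch, one field per section, with a deliberately partial Markdown renderer (only the first few sections, for brevity):

```python
from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    title: str
    problem: str
    issue_type: str                      # feature | bug | task | patch
    stories_or_repro: list = field(default_factory=list)
    acceptance_criteria: list = field(default_factory=list)
    in_scope: list = field(default_factory=list)
    out_of_scope: list = field(default_factory=list)
    codebase_context: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)

    def to_markdown(self) -> str:
        # Partial renderer: title, problem, type, and criteria only.
        lines = [
            f"# Intent: {self.title}",
            "",
            "## Problem / Goal",
            self.problem,
            "",
            "## Issue Type",
            self.issue_type,
            "",
            "## Acceptance Criteria",
        ]
        lines += [f"- [ ] {c}" for c in self.acceptance_criteria]
        return "\n".join(lines)
```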

Agent pattern

Single Agent with Optional Research Fan-Out

Simple requests (clear but unstructured): single agent. Read the input, structure it, produce the spec. No research needed.

Ambiguous requests (missing context): fan out to 2-3 research subagents (Locator, Analyzer) to understand the codebase area before asking questions.

Complex requests (multi-system, cross-domain): full research fan-out plus multiple rounds of clarification.

Intent is lighter than Plan. Most requests need a single agent with one round of questions, not the full 3-subagent parallel pattern.
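
The three tiers suggest a simple router. The triage signals (`domains`, `ambiguous`) are illustrative; substitute whatever your intake process actually records.

```python
def choose_pattern(request: dict) -> str:
    # Route by how much research the request needs.
    # Thresholds are illustrative, not prescriptive.
    domains = request.get("domains", 1)
    ambiguous = request.get("ambiguous", False)
    if domains > 1:
        return "full fan-out + multiple clarification rounds"
    if ambiguous:
        return "2-3 research subagents (Locator, Analyzer)"
    return "single agent, no research"
```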

Tool permissions

Allowed: Read, Search, Glob, Git log/diff. Denied: Execute, Write, Git commit.

Intent has the same read-only permissions as Plan. The difference is behavioral, not mechanical: Intent researches to understand the requirement. Plan researches to understand the solution.

Execute is denied for the same reason as Plan: no side effects during clarification.

What "interactive" means: Intent is the only phase that pauses for user input. The agent presents findings and questions, waits for answers, and verifies those answers before producing the specification. This is not a tool permission; it is a behavioral expectation defined in the prompt template.
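
A hypothetical permission table for the phase, mirroring the read-only posture described above. Tool names follow this page's badges; enforcement mechanics vary by agent runtime.

```python
# Intent-phase tool permissions: research tools on, side effects off.
INTENT_TOOLS = {
    "Read": True, "Search": True, "Glob": True, "Git log/diff": True,
    "Execute": False,     # no side effects during clarification
    "Write": False, "Git commit": False,
}

def check_tool(tool: str) -> None:
    # Unknown tools are denied by default.
    if not INTENT_TOOLS.get(tool, False):
        raise PermissionError(f"{tool} is denied in the Intent phase")
```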

Prompt template

# Intent Phase -- Requirements Analyst

## Role
You are a Requirements Analyst. Your job is to take a raw
request and produce a well-defined intent specification
that a Software Architect (Plan phase) can act on.

You do NOT write code. You do NOT create plans. You do NOT
modify files. You clarify requirements and produce a
normalized specification.

## Context
Request: ${raw_request}
Repository: ${repo_name}
Agentic Layer Config: ${agentic_layer_path}

## Methodology

### Step 1: Research
Before asking questions, research the codebase:
1. Use Glob to find files related to the request area
2. Read relevant files to understand current patterns
3. Search for existing implementations the request relates to
4. Check recent git history for context

### Step 2: Identify Gaps
Determine what is missing from the request:
- Is the scope defined or open-ended?
- Are there acceptance criteria?
- Could this be interpreted multiple ways?
- What would Plan have to guess at?

### Step 3: Clarify (if needed)
If gaps exist, present findings and ask questions:
- Only ask what research could not answer
- Ground questions in codebase evidence
- Be specific: "Should X work like Y?" not "What do you want?"
- Wait for answers before proceeding
- Verify answers against the codebase

If the request is already well-defined, skip to Step 4.

### Step 4: Produce Specification
Classify the issue type and produce the intent specification.

## Constraints
- Read-only. Do not modify any files.
- Research before asking. Never ask questions you could
  answer by reading the codebase.
- Ask only what matters. Every question should change the
  specification. Do not ask about preferences or style.
- Do not plan. Your job ends at the specification. Do not
  propose solutions, architectures, or file changes.
- Do not expand scope. If the request says "fix login timeout",
  the intent is about login timeout. Not the auth module.

## Output Format
Produce an intent specification with these exact sections:

1. Problem / Goal -- what needs to happen and why
2. Issue Type -- feature | bug | task | patch
3. User Stories (EARS format) or Reproduction Steps
4. Acceptance Criteria -- checkboxes, testable conditions
5. Scope -- what is in, what is explicitly out
6. Codebase Context -- relevant files and patterns found
7. Assumptions -- stated, verified, or flagged for Plan

## Quality Checks (Self-Validation)
Before submitting your specification, verify:
- [ ] The problem statement is unambiguous
- [ ] Issue type is classified
- [ ] Acceptance criteria are testable (not vague)
- [ ] Scope boundaries are explicit
- [ ] All assumptions are stated
- [ ] Plan could act on this without asking more questions

Prompt techniques used: Role Assignment, Template Variables, Research Protocol, Anti-Scope-Creep, Self-Validation.
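
The self-validation checklist can also be mechanized outside the prompt. A sketch over a plain-dict spec (field names are assumptions, not a defined schema), returning the checks that fail:

```python
def failed_checks(spec: dict) -> list:
    # Mirror the template's quality checks; empty list means pass.
    checks = {
        "problem statement present": bool(spec.get("problem")),
        "issue type classified":
            spec.get("issue_type") in {"feature", "bug", "task", "patch"},
        "acceptance criteria present": bool(spec.get("acceptance_criteria")),
        "scope boundaries explicit":
            bool(spec.get("in_scope")) and bool(spec.get("out_of_scope")),
    }
    return [name for name, ok in checks.items() if not ok]
```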

Best practices

Do

  • Research the codebase before asking the user anything
  • Ground every question in codebase evidence
  • Produce the spec directly if the input is already well-defined
  • State assumptions explicitly rather than hiding them
  • Classify the issue type based on evidence, not the user's label

Don't

  • Ask questions the codebase can answer
  • Start planning or proposing solutions
  • Ask more than one round of questions for simple requests
  • Expand scope beyond what the user described
  • Skip Intent for monitor-generated issues (they need it most)

Nuances

The "already well-defined" shortcut

Not every request needs full clarification. If the input has a clear problem statement, testable acceptance criteria, and explicit scope, Intent should produce the specification directly without asking questions. The methodology step says "if the input is already well-defined, skip steps 3-4." This keeps Intent from being a bottleneck on well-structured tickets.
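
The shortcut test can be approximated textually. A crude sketch that checks a raw Markdown ticket for the sections the shortcut requires; the header strings assume tickets follow the output template above, which may not hold for your intake format.

```python
REQUIRED_SECTIONS = ("## Acceptance Criteria", "## Scope")

def is_well_defined(ticket_md: str) -> bool:
    # Qualify for the shortcut only if the ticket already carries
    # acceptance criteria and explicit scope, with at least one
    # testable checklist item.
    has_sections = all(s in ticket_md for s in REQUIRED_SECTIONS)
    has_testable = "- [ ]" in ticket_md
    return has_sections and has_testable
```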

Intent subsumes classification

The existing issue-type routing (feature/bug/task/patch) can be performed by Intent rather than requiring a separate classification step. Intent has enough context from its research to determine the correct type. Teams that use Intent can skip the separate classify step.
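
Evidence-based classification can be sketched as a small decision function. The findings keys are hypothetical; the point is that classification keys off research evidence, not the user's label. A reproducible failure is a bug even if the ticket says "feature request".

```python
def classify_issue_type(findings: dict) -> str:
    # Decide from research evidence, not from the requester's label.
    if findings.get("reproducible_failure"):
        return "patch" if findings.get("hotfix") else "bug"
    if findings.get("new_behavior"):
        return "feature"
    return "task"
```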

Monitor-generated issues need Intent most

When the Monitor phase generates issues from production data (error spikes, latency regressions, anomaly detection), those issues are raw signals, not refined requirements. Intent turns "error rate spike on /checkout endpoint" into a well-scoped bug specification with reproduction context, affected files, and acceptance criteria. Without Intent, Plan has to guess at what the monitor signal means.

Intent and the feedback loops

Intent does not have its own retry loop. If Plan finds the intent specification insufficient, the current design is to escalate to human review rather than loop back. This is intentional: looping between Intent and Plan risks infinite refinement cycles. In practice, a well-constructed Intent spec rarely needs revision. If it does, the human can refine and re-run.

Context-window efficiency

Intent is lighter than Plan. Most requests need a single agent reading 5-10 files and asking 2-3 questions. Reserve the full parallel subagent fan-out for genuinely ambiguous, cross-system requests. For simple "clear but unstructured" inputs, a single-agent pass is sufficient.