Overview
After an AI agent implements a ticket, Draft runs a configurable verification pipeline to ensure the changes are correct before human review.
Configuration
Define verification commands in draft.yaml:
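The exact draft.yaml schema is not shown in this section; a minimal sketch, assuming a top-level `verify` key holding an ordered list of shell commands, might look like:

```yaml
# Hypothetical schema -- the `verify` key name and the commands
# are illustrative, not confirmed by this page.
verify:
  - npm run lint
  - npx tsc --noEmit
  - npm test
```

Commands run in the order listed, so the ordering here doubles as the pipeline's execution order.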
How It Works
- A ticket moves to VERIFYING after successful execution
- Each command runs sequentially in the ticket’s isolated worktree
- Commands stop on first failure — remaining commands are skipped
- Evidence is captured for every command (stdout, stderr, exit code)
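The steps above can be sketched in a few lines of Python. This is not Draft's actual implementation; the `Evidence` shape and `run_pipeline` name are illustrative, and only the behavior described above (sequential execution, stop on first failure, capture stdout/stderr/exit code) is taken from the docs.

```python
import subprocess
from dataclasses import dataclass

@dataclass
class Evidence:
    command: str
    stdout: str
    stderr: str
    exit_code: int

def run_pipeline(commands):
    """Run verification commands in order, stopping at the first failure."""
    evidence = []
    for cmd in commands:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        evidence.append(Evidence(cmd, proc.stdout, proc.stderr, proc.returncode))
        if proc.returncode != 0:
            break  # remaining commands are skipped
    return evidence
```

Note that every command that runs produces an evidence record, including the failing one; only the commands after the failure produce nothing.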
Outcomes
| Result | Ticket State | Next Step |
|---|---|---|
| All commands pass | NEEDS_HUMAN | Human reviews the changes |
| Any command fails | BLOCKED | Agent can retry or human intervenes |
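The state transition in the table reduces to a single predicate over the collected exit codes. A minimal sketch (the function name is illustrative):

```python
def next_state(exit_codes):
    """Map verification results to the ticket's next state.

    All commands passed -> NEEDS_HUMAN (ready for human review).
    Any command failed -> BLOCKED (agent retries or a human intervenes).
    """
    return "NEEDS_HUMAN" if all(code == 0 for code in exit_codes) else "BLOCKED"
```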
Evidence
Every verification command produces an Evidence record containing:
- stdout — Standard output captured to a file
- stderr — Standard error captured to a file
- exit code — Command return code
- command — The exact command that was run
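Since stdout and stderr are captured to files, a serialized Evidence record might look like the following. The field names and log paths are illustrative; the docs only specify which pieces are captured, not the on-disk format.

```json
{
  "command": "npm test",
  "exit_code": 0,
  "stdout": "evidence/cmd-1/stdout.log",
  "stderr": "evidence/cmd-1/stderr.log"
}
```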
Viewing Evidence
In the UI, expand a ticket’s detail panel to see evidence for each command:
- Green checkmark = passed
- Red X = failed (with exit code)
Evidence API
Best Practices
Keep commands fast
Verification runs on every iteration. Prefer targeted test suites over full test runs.
Include type checking
AI-generated code can have type errors. Add tsc --noEmit or equivalent.
Use linting
Catch formatting and style issues automatically with ruff check, eslint, etc.
Order matters
Put fast-failing commands first (lint before tests) to get quicker feedback.
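Putting these practices together, a verification list ordered fastest-first might look like this (assuming a top-level `verify` key listing shell commands; key name and commands are illustrative):

```yaml
# Fastest feedback first: lint, then type check, then a targeted test suite.
verify:
  - ruff check .
  - mypy src/
  - pytest tests/unit -q
```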