"A computer can never be held accountable, therefore a computer must never make a management decision."
— IBM, 1979
Your name is on the spec. The agent's name is on the code. That's the right order.
AI agents build exactly what you tell them. The gap between what you meant and what you wrote is where bugs, rework, and hallucinated features come from.
Sutra closes that gap — a spec authoring canvas that gives your agents testable acceptance criteria, edge cases, API contracts, and dependency maps before a single line of code is written.
Sutra is a story mapping tool built for teams using AI coding agents. Traditional tickets are written for humans — thin and ambiguous, leaving implementation decisions to the developer. AI agents need more: testable acceptance criteria, edge cases, API contracts, test type declarations, and dependency maps that tell the agent what it can call autonomously versus what needs a human in the loop.
Sutra gives you a structured canvas to build that context, sync it with Jira, and export it in agent-ready formats.
The mental model: Sutra is your spec authoring environment. Jira is your delivery tracking system. Sutra doesn't replace Jira — it completes it. Jira is the source of truth for status, sprint, and assignee. Sutra is the source of truth for specification.
- **Epic:** a major functional area or user journey. Contains flows. Maps to a Jira Epic. Has a description, NFRs, and an optional screen map.
- **Flow:** a user journey step within an epic. Contains stories. Has a description and structured API contracts. A Sutra concept, not a Jira issue type.
- **Story:** an atomic requirement. Maps to a Jira issue. Has acceptance criteria, edge cases, a test type, and dependencies. The unit the agent implements.
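As a rough illustration, the three levels nest like the following JSON sketch. The field names and values here are illustrative, not Sutra's exact schema; download the AI Map Template for the real one.

```json
{
  "epics": [
    {
      "name": "Checkout",
      "nfrs": ["Checkout pages load in under 2 seconds"],
      "flows": [
        {
          "name": "Pay by card",
          "apiContracts": [{ "method": "POST", "path": "/payments" }],
          "stories": [
            {
              "title": "Charge a saved card",
              "acceptanceCriteria": ["A declined card returns a retryable error"],
              "edgeCases": ["Card expired between save and charge"],
              "testType": "integration",
              "dependencies": [{ "service": "payment-gateway", "type": "external" }]
            }
          ]
        }
      ]
    }
  ]
}
```

Note the `type` on the dependency: that internal/external distinction is what tells the agent whether it may call the service autonomously.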
1. Start from scratch, import from Jira, or generate from a document using the AI template. Your map is the spec authoring surface.
2. Add acceptance criteria, edge cases, test types, and dependencies to stories. Add API contracts to flows. Set NFRs at the epic or global level.
3. If you're using Jira, push enriched stories to create new tickets or update existing ones. Push epics with full flow and API contract context. For Jira flows, see sutrasdd.app/#jira.
4. Export as a single markdown spec or a Claude Code project structure. Drop it into your repo and run your agent.
You have a requirements document and want a fully populated Sutra map without manual entry.
1. Go to Settings → Utilities → Download AI Map Template. The template contains the master prompt and the full schema.
2. Open claude.ai. Upload the template JSON and your PRD or ICD as attachments in the same message.
3. Prompt: "Follow the instructions in the _sutra block to generate a Sutra story map from the attached document. Return only valid JSON with no markdown fences."
4. Save Claude's output as a .json file. In Sutra: + New Map → Import → From File.
5. Claude does a good first pass but won't get everything right. Review each story: fill in edge cases, add API contracts to flows, verify test types.
You want to catch spec gaps and get an implementation plan before handing the spec to an agent. A human-in-the-loop review pass before the build starts.
1. Export JSON from the topbar. This is the full structured spec: every field, every story.
2. Upload it to claude.ai with a prompt like: "Read this Sutra story map. Identify spec gaps, ambiguous AC, and missing edge cases. Produce an implementation plan — story sequencing, dependencies, open questions. Do not generate code. Return the updated JSON with planning notes in the relevant openQuestions fields."
3. Claude returns a plan and updated JSON. Resolve conflicts, answer open questions, and adjust story scope.
4. Re-import: + New Map → Import → From File. Planning notes appear as open questions on stories. Resolve them, then export the final spec for building.
Importing Claude's JSON creates a new map — it does not overwrite your existing one. Review both before deciding which to keep.
Adding features to an existing product. You don't want the agent to rewrite working code.
1. Click + Insert Template in the green Existing Codebase card. A structured block is appended to your engineering notes.
2. Set the codebase root path. List what's already built per epic. Add key files the agent should read first.
3. Engineering notes land verbatim in the root CLAUDE.md. The agent reads this before touching any file, so it extends existing code instead of rewriting it.
You built a map in Sutra and want to share it with a teammate or use it on another device.
Export as JSON from the topbar. Lossless — all epics, flows, stories, NFRs, API contracts preserved. Share the file or import on another device.
Create empty map → import epics from Jira → sync each epic. Stories import with Jira keys. Flows, NFRs, and API contracts won't reconstruct as structured fields — they'll be in the description as text.
Use for Jira-native epics only. For Sutra-originated content, always use JSON.
The progress bar in the topbar scores how spec-complete your map is. Each story's description, test type, and edge cases contribute to the score. Hover to see exactly what's missing. Green = agent-ready.
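As a mental model only (this is an illustrative sketch, not Sutra's actual scoring code), an equal-weight version of the score from those three per-story factors could look like:

```python
def story_completeness(story: dict) -> float:
    """Score one story 0.0-1.0 from the three factors the progress bar tracks.

    Illustrative field names (description/testType/edgeCases); equal weights
    are an assumption, not Sutra's documented formula.
    """
    checks = [
        bool(story.get("description")),   # has a description
        bool(story.get("testType")),      # test type declared
        bool(story.get("edgeCases")),     # at least one edge case listed
    ]
    return sum(checks) / len(checks)


def map_completeness(stories: list[dict]) -> float:
    """Average the per-story scores across the whole map."""
    if not stories:
        return 0.0
    return sum(story_completeness(s) for s in stories) / len(stories)
```

A story with a description but no test type or edge cases would score 1/3; the hover tooltip is what tells you which factor is missing.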
Right-click any epic header, flow header, or story card for a context menu with Edit, Duplicate (stories only), Push to Jira, and Delete. Faster than hunting for the ✎ button.
On maps with multiple epics, a sticky strip appears at the bottom of the canvas with numbered epic pills. Click any pill to scroll that epic into view — essential on large maps.
Internal means the agent can call that service autonomously. External means the agent must stop and wait for a human. The most important field for agentic workflows — fill it in for every story that has a dependency.
Open Questions appear in the spec with a ⚠ marker. The agent is explicitly instructed to flag these and not guess. Use them to surface decisions that haven't been made yet.
A global NFR like "All API endpoints must require JWT" applies to every story in the entire spec. Set it once in Overview → NFRs. Injected above every epic in the export — the agent never misses it.
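For intuition, here is an illustrative fragment of how a global NFR might surface in an exported spec. The epic and story names are made up and the layout is approximate, not Sutra's exact output format.

```markdown
## Global NFRs
- All API endpoints must require JWT

## Epic: Checkout

### Story: Charge a saved card
AC:
- A declined card returns a retryable error
NFRs (inherited):
- All API endpoints must require JWT
```

Because the rule rides along with every epic and story, the agent sees it in context instead of having to remember a note from the top of the document.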
Export JSON regularly — it's your lossless backup. Preserves everything: flows, NFRs, API contracts, edge cases, open questions, dependency maps, release assignments, and Jira keys. Store it in your repo.
The tech stack drives test framework resolution in the exported spec. Without it, integration tests get a generic fallback message. Fill in Overview → Tech Stack before exporting.
| Shortcut | Action |
|---|---|
| Ctrl + Scroll on canvas | Zoom canvas in / out (pinch gesture on trackpad also works) |
| Pinch gesture anywhere | Zooms canvas only — browser zoom is blocked app-wide |
| Ctrl + = | Zoom in |
| Ctrl + − | Zoom out |
| Ctrl + 0 | Reset zoom to 100% |
| Escape | Close any open modal (disabled during the scripted guided tour) |
| Double-click epic label | Rename epic inline |
| Right-click any card | Context menu — Edit / Duplicate / Push / Delete |
A Sutra spec is a structured markdown document your agent can act on immediately — no interpretation required. Every export includes:
- **Acceptance criteria:** testable conditions that define "done" per story.
- **Edge cases:** boundary conditions and failure modes the agent must handle.
- **API contracts:** structured endpoint definitions scoped to each flow.
- **NFRs:** non-functional rules injected into every story automatically.
- **Dependency maps:** what the agent resolves autonomously versus what needs a human.
- **Test types:** unit, integration, or E2E, declared per story.
- **Engineering notes:** your stack, conventions, and what's already built, prepended to every export.
Claude Code reads your project, understands context, and implements stories one by one.
1. Run `claude` in your terminal from the project root. Claude Code reads your existing codebase before doing anything.
2. Paste the markdown contents directly, or reference the exported file path.
3. Use a prompt like: "Read this spec carefully. Implement the stories in order, starting with [Epic Name]. For each story: implement the feature, write the test indicated by the test type, and confirm AC is met before moving on. Ask me if anything is ambiguous."
4. Claude Code implements story by story. Review each before approving.
Single file vs project structure: Use "Single file" for smaller features. Use "Project structure" for Claude Code — one file per epic keeps context focused per session.
Engineering notes matter: Prepended to every export. Tell the agent about your stack, shared patterns, what's already built, and conventions it must follow.
Export after each release: Sutra stamps the engineering notes with what was built. Future exports tell the agent "these stories are done — don't rewrite them."
Don't skip edge cases: The most common agent failure mode is missing edge cases. "AC: user can divide" gets you happy path only. "AC: division by zero returns Error" changes the output entirely.
Coming Soon. :)
Last updated: March 2026
Sutra is a Chrome extension. This policy explains what data Sutra collects, how it is used, and how it is stored.
We collect nothing. Sutra does not collect, transmit, or store any personal data on external servers. There are no analytics, no tracking pixels, no third-party data sharing, and no accounts.
All data you create in Sutra — maps, stories, epics, flows, settings — is stored exclusively in your browser's local storage (chrome.storage.local) on your own device. It never leaves your machine unless you explicitly export it.
When you use Sutra's Jira integration, Sutra communicates directly from your browser to your Atlassian instance using your existing browser session cookies. Sutra does not store, proxy, or log your Jira credentials or API tokens. No Jira data passes through any external servers — the connection is entirely between your browser and your Atlassian instance.
Sutra requests the following Chrome permissions:
To save your maps and settings locally in your browser using chrome.storage.local.
To communicate with your Jira instance when you trigger a push or sync from the extension.
No other permissions are requested. Sutra does not access your browsing history, other tabs, or any data outside of the extension itself.
Because all data is stored locally, you have full control over it.
If this policy changes materially, it will be updated here and noted in the extension release notes. We will never introduce data collection without clearly disclosing it.
Questions? Reach out via the Chrome Web Store listing or the Sutra website at sutrasdd.app.