Documentation
Everything you need to integrate Mira with your AI coding workflow.
Getting Started
Mira connects stakeholders to AI coding assistants through five primitives: structured feedback on builds, feature tracks that scope work into features, project context that persists your brand and technical decisions, skills that encode domain knowledge, and subagents that define specialized AI personas. Everything flows into your coding tool via MCP.
Get started with a single command:
npx @okmira/mcp setup
This opens a browser to authenticate, picks your org and project, and auto-configures your MCP server, API key, .mira.json, and hooks. See Setup for manual configuration options.
- Deploy a build from your AI coding tool or via git push.
- Plan — organize work with feature tracks, link to branches, and scope feedback automatically.
- Shape — stakeholders give feedback with screenshots, create skills (brand rules, design standards), define agents, and build project context through context digests.
- Fetch everything into your AI assistant via MCP — structured feedback, feature tracks, project context, installable skills, and specialized agents.
- Apply and deploy again. Status tracking closes the loop — stakeholders see their input was addressed.
Mira connects to your AI coding tool via the Model Context Protocol (MCP) — an open standard supported by 18 tools including Claude Code, Cursor, Copilot, Windsurf, Gemini CLI, Amp, and more.
Step-by-Step Guide
A complete walkthrough for first-time users — from sign-up to your first feedback loop.
1. Create your project
Sign in at okmira.ai, create an organization and project. Generate an API key on the Setup tab — you'll need it to connect your AI coding tool.
2. Connect your AI coding tool
Run npx @okmira/mcp setup to auto-configure everything, or follow the manual steps in Setup. This connects your tool to Mira via MCP so it can fetch feedback, skills, and project context.
3. Register your first build
Deploy via your AI tool (auto-detected via hooks) or set up a Vercel webhook for git-push deploys. See Webhooks and Deploy Platforms for platform-specific details.
4. Create a feature track
Organize work into features. Propose a track with a description and success criteria, approve it, then activate it with a git branch name. Builds on that branch auto-link to the track. See Feature Tracks.
5. Set up your project brief
Define your product identity: description, brand direction, tone, technical stack, competitors. AI uses this as persistent context across all features. See Project Context.
6. Invite your team
Add stakeholders with roles from the Members tab. Internal team members and external clients are handled separately — each gets their own feedback stream.
7. Review, fetch, apply
Stakeholders review versions and leave structured feedback. Developers fetch feedback via MCP, apply changes, and deploy. Status tracking closes the loop — stakeholders see their input was addressed. The cycle repeats.
Setup
Quick setup (recommended)
One command configures everything — MCP server, API key, project config, and hooks. Opens a browser to authenticate, lets you pick your org and project, then writes all config files automatically.
npx @okmira/mcp setup
Supports --upgrade to refresh configs without re-authenticating, --api-url for custom API URLs, and --no-hooks to skip hook installation. Auto-detects your installed AI coding tool and writes the correct config format.
Manual setup
If you prefer to configure things manually, follow these three steps.
1. Create a project & generate an API key
Sign in at okmira.ai, create an organization and project, then generate an API key from the project setup tab.
2. Add the MCP server
Select your AI coding tool, then follow the instructions below.
Run this in your terminal:
claude mcp add mira \
  -e CC_FEEDBACK_API_URL=https://okmira.ai \
  -e CC_FEEDBACK_API_KEY=ccf_YOUR_KEY_HERE \
  -- npx @okmira/mcp
Takes effect immediately — no restart needed.
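For tools that read MCP servers from a JSON config file instead of a CLI command (Cursor and Windsurf use this shape), the equivalent entry looks roughly like the sketch below. The exact file location varies by tool; the env variable names mirror the Claude command above, and the rest is an assumption about the common `mcpServers` format rather than Mira-specific documentation.

```json
{
  "mcpServers": {
    "mira": {
      "command": "npx",
      "args": ["@okmira/mcp"],
      "env": {
        "CC_FEEDBACK_API_URL": "https://okmira.ai",
        "CC_FEEDBACK_API_KEY": "ccf_YOUR_KEY_HERE"
      }
    }
  }
}
```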
3. Create project config
Add a .mira.json file to your project root with your project ID:
{
"projectId": "your-project-uuid"
}
Deploy Platforms
Mira's auto-register hook detects deploy URLs from 7 hosting platforms. For tools with hook support (Claude Code, Codex CLI, Factory.ai, Kiro CLI), builds are registered automatically when a deploy completes.
Auto-detection (hooks)
The PostToolUse hook watches terminal output for deploy URLs. When detected, it registers the build with Mira automatically. A .mira.json file must exist in the project root with the project ID.
| Platform | Detection | Details |
|---|---|---|
| Vercel | cli + domain | Detects `Preview:` / `Production:` CLI output and *.vercel.app domains |
| Netlify | cli + domain | Detects `Website URL:` / `Live Draft URL:` CLI output and *.netlify.app domains |
| Cloudflare Pages | cli + domain | Detects `Deployment URL` CLI output and *.pages.dev domains |
| Firebase Hosting | cli + domain | Detects `Hosting URL:` CLI output and *.web.app / *.firebaseapp.com domains |
| Render | domain only | Detects *.onrender.com domains in terminal output |
| AWS Amplify | domain only | Detects *.amplifyapp.com domains in terminal output |
| Azure Static Web Apps | domain only | Detects *.azurestaticapps.net domains in terminal output |
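The detection step can be pictured as a small pattern match over terminal output. This is an illustrative sketch, not Mira's actual hook source; the domain patterns are inferred from the table above.

```javascript
// Hypothetical sketch of deploy-URL detection over terminal output.
// Domain patterns are inferred from the table above, not taken from
// Mira's hook implementation.
const DEPLOY_URL_PATTERNS = [
  /https:\/\/[\w.-]+\.vercel\.app\S*/,
  /https:\/\/[\w.-]+\.netlify\.app\S*/,
  /https:\/\/[\w.-]+\.pages\.dev\S*/,
  /https:\/\/[\w.-]+\.(web\.app|firebaseapp\.com)\S*/,
  /https:\/\/[\w.-]+\.onrender\.com\S*/,
  /https:\/\/[\w.-]+\.amplifyapp\.com\S*/,
  /https:\/\/[\w.-]+\.azurestaticapps\.net\S*/,
];

// Returns the first deploy URL found in the output, or null.
function detectDeployUrl(terminalOutput) {
  for (const pattern of DEPLOY_URL_PATTERNS) {
    const match = terminalOutput.match(pattern);
    if (match) return match[0];
  }
  return null;
}
```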
Vercel webhook
For git-push deploys that bypass your AI coding tool, set up a Vercel deploy webhook to auto-register builds. See the Webhooks section for setup instructions.
CI/CD integration
For platforms without webhook support, or for any CI/CD pipeline, register builds directly via the REST API:
curl -X POST https://okmira.ai/api/v1/projects/PROJECT_ID/builds \
-H "Authorization: Bearer ccf_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"url":"https://your-deploy-url","git_ref":"abc123","track":"main"}'
Any platform
Any URL can be registered as a build — either via the MCP register_build tool or the REST API. The auto-detection hook and webhooks are conveniences — the platform itself is URL-agnostic.
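The same call can be made from a Node CI script. A minimal sketch mirroring the curl example above — the endpoint and field names come from that example; the helper names are illustrative, and `fetch` assumes Node 18+:

```javascript
// Sketch of registering a build from a CI script, mirroring the curl
// example above. Endpoint and JSON field names come from that example;
// everything else is illustrative.
function buildRegisterRequest(projectId, apiKey, build) {
  return {
    url: `https://okmira.ai/api/v1/projects/${projectId}/builds`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      url: build.url,
      git_ref: build.gitRef,
      track: build.track,
    }),
  };
}

async function registerBuild(projectId, apiKey, build) {
  const req = buildRegisterRequest(projectId, apiKey, build);
  const res = await fetch(req.url, req); // global fetch, Node 18+
  if (!res.ok) throw new Error(`Registration failed: ${res.status}`);
  return res.json();
}
```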
MCP Tools Reference
The Mira MCP server exposes twenty tools. It's a thin client that calls the REST API — no direct database access.
register_build
Registers a new build after deployment. Returns a review URL for stakeholders.
Input
{
"project_id": "project_uuid",
"url": "https://my-app-abc123.vercel.app",
"git_ref": "a1b2c3d",
"track": "feature/nav-redesign"
}
Output
{
"build_id": "build_uuid",
"url": "https://my-app-abc123.vercel.app",
"deployed_at": "2026-02-20T14:30:00Z",
"review_url": "https://okmira.ai/review/build_uuid"
}
fetch_feedback
Fetches actionable feedback for a build or across all builds in a project. Defaults to pending + acknowledged feedback. Optionally filter by status or type (internal/external).
Input
{
"build_id": "build_uuid", // or use project_id for cross-build
"project_id": "project_uuid", // optional — fetch across all builds
"status": "pending", // optional (defaults to pending + acknowledged)
"type": "internal" // optional
}
Output
{
"build": { "id": "...", "url": "...", "deployed_at": "...", "track": "..." },
"feedback": [
{
"id": "feedback_uuid",
"author": "Sarah Chen",
"role": "Visual Designer",
"type": "internal",
"status": "pending",
"works": "Overall layout and hierarchy are solid.",
"doesnt_work": "Secondary nav color feels disconnected.",
"suggested_direction": "Try the muted teal (#5B9A8B).",
"assigned_to": "Alex Rivera"
}
],
"summary": { "total": 1, "pending": 1, "by_role": { "Visual Designer": 1 } }
}
update_feedback_status
Updates the status of one or more feedback items. Valid transitions: pending → acknowledged → applied, or any → stale. Use claim/unclaim to coordinate who's handling what across your team.
Input
{
"feedback_ids": ["feedback_uuid_1", "feedback_uuid_2"],
"status": "applied",
"claim": true, // optional — claim these items for yourself
"unclaim": true // optional — release your claim
}
Output
{
"updated": 2,
"feedback_ids": ["feedback_uuid_1", "feedback_uuid_2"],
"status": "applied"
}
check_pending_feedback
Checks for unreviewed feedback across recent builds. Acts as an inbox to surface feedback that needs attention.
Input
{
"project_id": "project_uuid"
}
Output
{
"project": "My Project",
"pending_count": 3,
"builds_with_pending": [
{
"build_id": "build_uuid",
"track": "feature/nav-redesign",
"pending_count": 2,
"roles": ["Visual Designer", "Client Stakeholder"]
}
]
}
fetch_skills
Lists published skills for a project's organization. Skills encode stakeholder domain knowledge (brand guidelines, SEO rules, design standards) as reusable instruction sets. Supports optional category and slug filters.
Input
{
"project_id": "project_uuid",
"category": "Brand", // optional
"slug": "brand-voice" // optional
}
Output

Found 2 skills for this project:

1. **Brand Voice** (`brand-voice`) — Brand — v2
   Install path: `.agents/skills/brand-voice/SKILL.md`
2. **Accessibility Checklist** (`a11y-checklist`) — Accessibility — v1
   Install path: `.agents/skills/a11y-checklist/SKILL.md`

Use `install_skill` with the skill slug to get the full content.
install_skill
Fetches a single skill by slug and returns the full SKILL.md content ready for installation. The SKILL.md follows the Claude Code format: YAML frontmatter + markdown body.
Input
{
"project_id": "project_uuid",
"skill_slug": "brand-voice"
}
Output

Install this skill by writing the following content to `.agents/skills/brand-voice/SKILL.md`:

---
name: Brand Voice
description: Tone and terminology guidelines
allowed-tools:
  - Edit
  - Write
---

# Brand Voice Guidelines

Always use active voice...

After installation, available as `/brand-voice` in Claude Code.
fetch_subagents
Lists published subagents for a project's organization. Subagents define specialized AI agent personas (code reviewers, security auditors, QA leads) with custom system prompts and tool configurations.
Input
{
"project_id": "project_uuid",
"category": "Security", // optional
"slug": "security-auditor" // optional
}
Output

Found 2 subagents for this project:

1. **Code Reviewer** (`code-reviewer`) — Code Quality — v1
   Install path: `.claude/agents/code-reviewer.md`
2. **Security Auditor** (`security-auditor`) — Security — v2
   Install path: `.claude/agents/security-auditor.md`

Use `install_subagent` with the subagent slug to get the full content.
install_subagent
Fetches a single subagent by slug and returns the full AGENT.md content ready for installation. The AGENT.md follows the Claude Code format: YAML frontmatter (tools, model, permissionMode, etc.) + markdown body (system prompt).
Input
{
"project_id": "project_uuid",
"subagent_slug": "security-auditor"
}
Output

Install this subagent by writing the following content to `.claude/agents/security-auditor.md`:

---
name: Security Auditor
description: Reviews code for security vulnerabilities
model: sonnet
tools:
  - Read
  - Grep
  - Glob
permissionMode: plan
---

# Security Auditor

You are a security-focused code reviewer...

After installation, Claude will auto-delegate security tasks to this agent.
Feature Track tools
fetch_feature_tracks
Lists feature tracks for a project. Optionally filter by status (proposed, approved, active, merged, parked).
Input
{
"project_id": "project_uuid",
"status": "active" // optional
}
Output

Found 3 feature tracks:

1. **Nav Redesign** — active (branch: feature/nav-redesign)
   2 builds, 5 pending feedback
2. **Onboarding Flow** — proposed
   Brief: Simplify the first-run experience
3. **Dark Mode Polish** — merged
   Completed 2026-03-10
draft_feature_track
Previews a feature track before creating it. Returns a formatted summary for review — nothing is saved.
Input
{
"project_id": "project_uuid",
"name": "Nav Redesign",
"brief_what": "Simplify the top navigation",
"brief_who": "Marketing team and new users",
"brief_success_criteria": "Reduced bounce rate on landing page",
"brief_dependencies": "Design tokens finalized"
}
Output

Feature track preview:

**Nav Redesign**
- What: Simplify the top navigation
- Who: Marketing team and new users
- Success criteria: Reduced bounce rate on landing page
- Dependencies: Design tokens finalized

Use create_feature_track to save this track.
create_feature_track
Creates a new feature track in proposed status. Accepts the same inputs as draft_feature_track.
Input
{
"project_id": "project_uuid",
"name": "Nav Redesign",
"brief_what": "Simplify the top navigation",
"brief_who": "Marketing team and new users"
}
Output

Feature track created: **Nav Redesign** (proposed)
ID: feature_track_uuid
Next: use activate_feature_track to link a branch and start work.
activate_feature_track
Moves a feature track to active status and optionally links it to a git branch. Builds on that branch are automatically scoped to this track.
Input
{
"feature_track_id": "feature_track_uuid",
"track_ref": "feature/nav-redesign" // optional
}
Output

Feature track activated: **Nav Redesign** — active
Branch: feature/nav-redesign
Builds registered on this branch will be scoped to this track.
complete_feature_track
Marks a feature track as merged. Triggers a context digest extraction from the track's feedback history.
Input
{
"feature_track_id": "feature_track_uuid"
}
Output

Feature track completed: **Nav Redesign** — merged
3 builds, 8 feedback items processed
A context digest will be generated for stakeholder review.
Project Context tools
fetch_project_context
Returns the project brief — persistent context that shapes how AI builds your product. Includes description, competitors, visual direction, tone, technical stack, and scope.
Input
{
"project_id": "project_uuid"
}
Output

Project brief for **My Project**:

**Description:** E-commerce platform for artisan goods
**Competitors:** Etsy, Shopify, Not On The High Street
**Visual direction:** Warm, editorial, generous whitespace
**Tone:** Friendly but authoritative, never corporate
**Technical:** Next.js 15, Tailwind, Neon Postgres
**Scope:** MVP — product listing, search, checkout
update_project_brief
Updates one or more sections of the project brief. Only provided fields are changed — omitted fields are preserved.
Input
{
"project_id": "project_uuid",
"description": "E-commerce platform for artisan goods",
"visual_direction": "Warm, editorial, generous whitespace",
"tone": "Friendly but authoritative"
}
Output

Project brief updated:
Changed: description, visual_direction, tone
Unchanged: competitors, technical, scope
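The partial-update semantics ("only provided fields are changed") can be sketched as a shallow merge. This is an illustrative helper, not the server implementation:

```javascript
// Illustrative sketch of update_project_brief's partial-update semantics:
// only keys present in the patch change; omitted fields keep their values.
function applyBriefPatch(brief, patch) {
  const updated = { ...brief };
  const changed = [];
  for (const [key, value] of Object.entries(patch)) {
    if (value !== undefined && updated[key] !== value) {
      updated[key] = value;
      changed.push(key);
    }
  }
  return { updated, changed };
}
```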
fetch_context_digest
Returns pending context digest entries extracted from completed feature tracks. Entries represent key decisions and patterns discovered during development.
Input
{
"project_id": "project_uuid"
}
Output

2 pending digest entries:

1. [Nav Redesign] "Hamburger menu performs better than mega-nav for this audience — keep mobile-first navigation pattern"
2. [Nav Redesign] "Brand teal (#5B9A8B) tested well as accent color across all nav states"

Use approve_context_entry to accept or reject these entries.
approve_context_entry
Approves or rejects pending context digest entries. Approved entries become part of the project's persistent knowledge.
Input
{
"digest_id": "digest_uuid",
"approved_indices": [0, 1],
"rejected_indices": [2] // optional
}
Output

Context digest reviewed:
Approved: 2 entries added to project knowledge
Rejected: 1 entry discarded
Project context updated.
Issue tools
fetch_issues
Fetches issues reported by end users for a project. Returns severity, title, description, page URL, reporter, and status. Use this to review what users are reporting and prioritize fixes.
Input
{
"project_id": "project_uuid",
"status": "open", // optional
"severity": "critical" // optional
}
Output

## 5 issues — 3 open — 1 critical

### [CRITICAL] Checkout button unresponsive on mobile — open

Tapping the checkout button does nothing on iOS Safari. Happens on both iPhone 14 and iPad.

**Page:** /cart
**Reported by:** Jane D.
_issue_id: `issue_uuid_1`_

---

### [HIGH] Search results show wrong prices — triaged

Prices in search results don't match the product page. Seems to affect items with active discounts.

**Page:** /search?q=shoes
**Reported by:** Anonymous
_issue_id: `issue_uuid_2`_
update_issue_status
Update the status of one or more issues. Use after triaging, starting work on, or resolving an issue.
Input
{
"issue_ids": ["issue_uuid_1", "issue_uuid_2"],
"status": "in_progress"
}
Output
Updated 2 issues to **in_progress**.
assign_issue
Assign an issue to a team member. Use "me" as the assigned_to_id to self-assign.
Input
{
"issue_id": "issue_uuid_1",
"assigned_to_id": "me"
}
Output
Issue assigned to yourself.
Webhooks
For git-push deploys that bypass your AI coding tool, set up a Vercel deploy webhook to automatically register builds.
Setup
- Go to your project's setup tab and click Generate Webhook Token to get a dedicated whk_ token.
- In your Vercel project, go to Settings → Webhooks and add a new webhook.
- Set the URL to: https://okmira.ai/api/webhooks/vercel?token=whk_YOUR_TOKEN
- Select the deployment.succeeded event.
The webhook captures the deploy URL, git ref, branch, and commit message. It accepts both whk_ and ccf_ tokens for backwards compatibility.
Multiple Vercel projects
By default, each Mira project uses an API key (ccf_) with a per-project webhook. The first webhook event auto-learns the Vercel project ID, and subsequent events are filtered to only that Vercel project.
For teams with multiple Vercel projects sharing one Mira project: generate a webhook token (whk_) and add it as a webhook at the Vercel team level. All matching Vercel projects will register builds into the same Mira project.
Feature tracks can be scoped to a specific Vercel project name — only builds from that project auto-link to the track. Unscoped tracks (no Vercel project name) match builds from any Vercel project with a matching branch.
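The scoping rule above can be sketched as a predicate over an incoming build. The rule itself comes from this section; the field names and helper are illustrative:

```javascript
// Sketch of the auto-link rule described above: a build matches a track
// when the branches agree and the track is either unscoped or scoped to
// the same Vercel project. Field names are illustrative.
function buildMatchesTrack(build, track) {
  if (build.branch !== track.branch) return false;
  if (!track.vercelProjectName) return true; // unscoped: any Vercel project
  return build.vercelProjectName === track.vercelProjectName;
}
```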
Feedback Structure
Every piece of feedback has three parts, designed to give AI assistants actionable instructions:
What works
Positive feedback — what should be preserved. Prevents the AI from breaking things that stakeholders already approve of.
What doesn't work
Issues and concerns. Identifies specific problems without prescribing solutions.
Suggested direction
The gold for AI consumption. Gives the assistant a concrete instruction — not just a problem, but a direction to move in.
Feedback statuses
- Pending — not yet reviewed by the developer
- Acknowledged — developer has seen it
- Applied — feedback has been addressed in a subsequent build
- Stale — no longer relevant
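The lifecycle above, combined with the transition rules stated for update_feedback_status (pending → acknowledged → applied, or any → stale), can be sketched as a small transition table. Illustrative only:

```javascript
// Sketch of the feedback status lifecycle. The allowed transitions come
// from the update_feedback_status description: pending → acknowledged →
// applied, or any status → stale. The helper itself is illustrative.
const TRANSITIONS = {
  pending: ["acknowledged", "stale"],
  acknowledged: ["applied", "stale"],
  applied: ["stale"],
  stale: [],
};

function canTransition(from, to) {
  return (TRANSITIONS[from] ?? []).includes(to);
}
```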
Internal vs external
Internal feedback comes from your agency or team — iterative, candid working feedback. External feedback comes from clients — more formal review feedback. Developers can filter and handle these separately via the MCP tools.
Image attachments
Stakeholders can paste or drag screenshots directly into the feedback form. Images appear on the review page alongside the text feedback, and the AI coding assistant receives them inline when fetching feedback via MCP — no separate links or attachments to manage.
Hooks
Hooks enable two automations, currently supported by tools with hook support (Claude Code, Codex CLI, Factory.ai, and Kiro CLI):
Auto-register builds
A PostToolUse hook watches for deploy URLs in terminal output. When a deploy URL is detected, it automatically registers the build with Mira. See Deploy Platforms for the full list of supported platforms.
Check pending feedback
A SessionStart hook runs at the beginning of each session and surfaces any unreviewed feedback — e.g. "You have 3 pieces of pending feedback from your designer and SEO lead."
Hooks require a .mira.json in your project root with the project ID. API credentials are read from the MCP server config — no duplication needed.
For tools without hook support (Cursor, Windsurf, Copilot, Gemini CLI), use the Vercel webhook to auto-register builds on deploy.
Issue Widget
The embeddable issue widget lets end users report issues directly from your site. Issues appear in the Issues tab on the dashboard, and developers can fetch and triage them via MCP — no email copy-paste needed.
Setup
1. Generate a widget secret — go to the Widget tab in your project settings and generate a secret. Your server will use this to sign JWTs.
2. Sign a JWT on your server — create an HMAC-SHA256 JWT with the widget secret. The payload must include your user's ID and the project ID:
import jwt from "jsonwebtoken";
const token = jwt.sign(
{
sub: user.id, // your user's ID
pid: "YOUR_PROJECT_ID", // Mira project ID
name: user.displayName, // optional
email: user.email, // optional
},
WIDGET_SECRET,
{ expiresIn: "1h" }
);
3. Embed the widget — add the script tag to your page and configure it with the signed JWT:
<script src="https://okmira.ai/api/v1/widget/script?pid=YOUR_PROJECT_ID"></script>
<script>window.Mira.configure({ jwt: serverSignedJwt });</script>
<button data-mira-issues>Report an issue</button>
Activation modes
- data-mira-issues — add to any element to open the widget on click
- window.Mira.openIssueReport({ severity? }) — open programmatically from your code
- data-floating="true" on the script tag — shows a floating action button in the bottom-right corner
MCP integration
Developers use three MCP tools to work with issues: fetch_issues to list and filter reported issues, update_issue_status to change status (open → triaged → in progress → resolved → closed), and assign_issue to take ownership of an issue.
Translations
The widget defaults to English. Pass a labels object to translate any or all UI strings:
window.Mira.configure({
jwt: serverSignedJwt,
labels: {
heading: "Signaler un problème",
titleLabel: "Titre",
titlePlaceholder: "Brève description du problème",
descriptionLabel: "Description",
descriptionPlaceholder: "Étapes pour reproduire…",
severityLabel: "Gravité",
low: "Faible", medium: "Moyen",
high: "Élevé", critical: "Critique",
submit: "Envoyer",
submitting: "Envoi en cours…",
success: "Problème signalé. Merci !",
required: "Le titre et la description sont requis.",
// Past issues section
myIssues: "Vos signalements",
myIssuesCount: "Vos signalements ({count})",
statusOpen: "Ouvert",
statusTriaged: "Trié",
statusInProgress: "En cours",
statusResolved: "Résolu",
statusClosed: "Fermé",
}
});
Partial overrides work — only the keys you provide are replaced. You can call configure again at any time to update labels dynamically (e.g. after a language switch).
How it works
The widget runs in a closed shadow DOM — no CSS conflicts with your site. Form submissions include the page URL automatically. Draft text is saved to localStorage so users don't lose their work if they navigate away. Each issue captures a title, description, and severity (low / medium / high / critical).
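The draft-saving behavior can be pictured like so. This is a sketch, not the widget's source: `storage` stands in for localStorage (same `setItem`/`getItem` interface), and the key scheme is made up for illustration:

```javascript
// Illustrative sketch of draft persistence. `storage` stands in for
// localStorage; the key scheme is hypothetical.
function draftKey(projectId) {
  return `mira:issue-draft:${projectId}`;
}

function saveDraft(storage, projectId, draft) {
  storage.setItem(draftKey(projectId), JSON.stringify(draft));
}

function loadDraft(storage, projectId) {
  const raw = storage.getItem(draftKey(projectId));
  return raw ? JSON.parse(raw) : null;
}
```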
Plan
Feature Tracks
Feature tracks organize work into scoped features with a full lifecycle. Each track moves through four stages: proposed (drafted with a brief), approved (stakeholders agree on scope), active (linked to a branch — builds and feedback scoped automatically), and merged or parked (completed or shelved).
When a track is active and linked to a branch, any build registered on that branch is automatically associated with the track. Feedback is scoped to the feature, not scattered across unrelated builds.
Feature brief
Each track includes a brief that gives AI context about what's being built:
- What — what the feature does
- Who — who it's for
- Success criteria — how you'll know it works
- Dependencies — what needs to be in place first
MCP tools
Five MCP tools manage the feature track lifecycle: fetch_feature_tracks, draft_feature_track, create_feature_track, activate_feature_track, complete_feature_track.
Project Context
Project context is persistent knowledge that shapes how AI builds your product. It has two parts: the project brief and context digests.
Project brief
The project brief defines your product's identity across six sections:
- Description — what the product is
- Competitors — who you're positioned against
- Visual direction — design language, color palette, typography
- Tone — voice and writing style
- Technical — stack, constraints, conventions
- Scope — current boundaries and focus areas
The brief is fetched by the AI assistant at the start of work and used to inform every decision. Update it via the update_project_brief tool or through the web dashboard.
Context digests
When a feature track is completed, AI extracts key decisions and patterns from the track's feedback history. These become digest entries — pending items that stakeholders review and approve.
Approved entries become part of the project's persistent knowledge, building an evolving understanding of the product over time. Use fetch_context_digest and approve_context_entry to manage digest entries via MCP.
Teach
Skills
Skills are reusable instruction sets that encode domain knowledge — brand guidelines, SEO rules, design standards, accessibility requirements. Stakeholders create them through the web dashboard; developers install them into their AI coding tools as slash commands.
Creating skills
Stakeholders create skills from the organization dashboard. Three starting points:
- From scratch — write custom content and configure frontmatter
- From a template — 7 built-in templates (Brand Voice, Visual Design Standards, SEO Guidelines, Accessibility Checklist, Code Review Standards, Content Strategy, Performance Budget)
- Import from GitHub — import existing SKILL.md files from any repository
Installing skills
Developers use the MCP tools to discover and install skills:
- fetch_skills — lists available skills with install paths
- install_skill — returns the full SKILL.md content for a specific skill
- The AI assistant writes the content to .agents/skills/{slug}/SKILL.md
- The skill becomes available as a slash command in Claude Code
SKILL.md format
Skills follow the Claude Code SKILL.md format: YAML frontmatter + markdown body.
---
name: Brand Voice
description: Tone and terminology guidelines
allowed-tools:
  - Edit
  - Write
---

# Brand Voice Guidelines

Always use active voice. Refer to the product as "Mira", never "the platform"...
Specialize
Subagents
Subagents are specialized AI agent personas with custom system prompts, tool access, model selection, and execution configuration. Stakeholders define them through the web dashboard; developers install them and Claude auto-delegates based on task description.
Creating subagents
Stakeholders create subagents from the organization dashboard. Three starting points:
- From scratch — write a custom system prompt and configure execution settings
- From a template — 7 built-in templates (Code Reviewer, QA Lead, Security Auditor, Performance Auditor, Design System Guardian, Documentation Writer, Architecture Reviewer)
- Import from GitHub — import existing AGENT.md files from community repositories
Installing subagents
Developers use the MCP tools to discover and install subagents:
- fetch_subagents — lists available subagents with install paths
- install_subagent — returns the full AGENT.md content for a specific subagent
- The AI assistant writes the content to .claude/agents/{slug}.md
- Claude auto-delegates matching tasks to the installed agent
AGENT.md format
Subagents follow the Claude Code AGENT.md format: YAML frontmatter + markdown body (system prompt).
---
name: Security Auditor
description: Reviews code for security vulnerabilities
model: sonnet
tools:
  - Read
  - Grep
  - Glob
permissionMode: plan
maxTurns: 10
---

# Security Auditor

You are a security-focused code reviewer. Analyze code for OWASP Top 10 vulnerabilities...
Configuration options
The AGENT.md frontmatter supports these configuration fields:
| Field | Description |
|---|---|
| tools | Allowed tools (Read, Grep, Glob, Edit, Write, Bash, etc.) |
| disallowedTools | Explicitly denied tools |
| model | Model selection: opus, sonnet, or haiku |
| permissionMode | default, plan, or bypassPermissions |
| maxTurns | Maximum agentic turns before stopping |
| skills | Skills to include as context |
| memory | Persistent memory across invocations |
| isolation | Run in isolated worktree |
| background | Run as background task |