Create and refine opencode agents through a guided Q&A process.
Agent creation is conversational, not transactional:

- MUST NOT assume what the user wants; ask.
- SHOULD start with broad questions, drilling into details only when needed.
- Users MAY skip configuration they don't care about.
- MUST always show drafts and iterate based on feedback.

The goal is to help users create agents that fit their needs, not to dump every possible configuration option on them.
Batching: Use the question tool for 2+ related questions. Single questions → plain text.
Permissions:

- You MUST ask the user about permissions explicitly.
- If the user selects "Standard/Default" or "No extra", do NOT list bash, read, write, or edit permissions. Rely on system defaults.
- Only add explicit permission blocks for tools when the user requests NON-STANDARD access (e.g., restrictive rules or specific allows).
- EXCEPTION: skills MUST ALWAYS be configured with `"*": "deny"` and explicit allows, regardless of tool permissions.
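As a sketch, the "standard tool permissions plus one skill" case might produce frontmatter like the following (the skill name `commit-helper` is illustrative, not from the schema):

```yaml
# Tool permissions (bash, read, write, edit) are intentionally absent:
# system defaults apply. Only the mandatory skill block is present.
permission:
  skill:
    "*": "deny"              # deny-by-default is required for skills
    commit-helper: "allow"   # hypothetical skill the user asked for
```

Note that the skill block appears even though no other permission keys do; the exception above overrides the "rely on defaults" rule.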
## Agent Locations

| Scope   | Path                                 |
| ------- | ------------------------------------ |
| Project | `.opencode/agent/<name>.md`          |
| Global  | `~/.config/opencode/agent/<name>.md` |
## Agent File Format

```markdown
---
description: When to use this agent. Include trigger examples.
model: anthropic/claude-sonnet-4-20250514 # Optional
mode: subagent # Optional (defaults to undefined/standard)
permission:
  skill: { "*": "deny", "my-skill": "allow" }
  bash: { "*": "ask", "git *": "allow" }
---

System prompt in markdown body (second person).
```
Full schema: See references/opencode-config.md
## Agent Modes

| Mode        | Description                                           |
| ----------- | ----------------------------------------------------- |
| (undefined) | Standard agent, visible to user and tools (default)   |
| `subagent`  | Specialized task-tool agent, hidden from the main list |
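Putting the format and mode together, a minimal subagent file (description and prompt text are illustrative) might look like:

```markdown
---
description: Reviews staged changes for correctness and style. Use when the
  user asks for a code review or says "review this".
mode: subagent
---

You are a meticulous code reviewer. Focus on correctness first, style second.
Never modify files; report findings as a prioritized list.
```

Because `mode: subagent` is set, this agent is reachable via the task tool but hidden from the main agent list.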
## Phase 1: Core Purpose (Required)

Ask these first; they shape everything else:

1. "What should this agent do?"
   - Get the core task/domain
   - Examples: "review code", "help with deployments", "research topics"
2. "What should trigger this agent?"
   - Specific phrases, contexts, file types
   - Becomes the `description` field
3. "What expertise/persona should it have?"
   - Tone, boundaries, specialization
   - Shapes the system prompt
## Phase 1.5: Research the Domain

MUST NOT assume your knowledge is current. After understanding the broad strokes:

- Search for current best practices in the domain
- Check for updates to frameworks, tools, or APIs the agent will work with
- Look up documentation for any unfamiliar technologies mentioned
- Find examples of how experts approach similar tasks

This research informs better questions in Phase 2 and produces a more capable agent.

Example: the user wants an agent for "Next.js deployments" → research current Next.js deployment patterns, Vercel vs. self-hosted, App Router vs. Pages Router, common pitfalls, etc.
## Phase 2: Capabilities (Ask broadly, then drill down)

1. "What permissions does this agent need?" (use the question tool)
   - Options: "Standard (Recommended)", "Read-Only", "Full Access", "Custom"
   - Standard: do NOT add bash, read, write, or edit to the config. Rely on defaults.
   - Read-Only: explicitly deny write/edit/bash.
   - Full Access: allow `bash *` if needed.
   - Custom: ask specific follow-ups.
2. "Should this agent use any skills?"
   - If yes: "Which ones?"
   - ALWAYS configure `permission.skill` with `"*": "deny"` and explicit allows.
   - This applies even if other permissions are standard.
3. "Is this a subagent?"
   - If yes: set `mode: subagent`
   - If no: leave `mode` undefined (standard)
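As a sketch, the "Read-Only" choice could translate to a permission block like this (key names follow the tools the guide mentions; check `references/opencode-config.md` for the exact schema):

```yaml
# Hypothetical "Read-Only" frontmatter fragment: mutation tools are
# denied explicitly; read access falls through to system defaults.
permission:
  write: "deny"
  edit: "deny"
  bash: "deny"
```

This is one of the few cases where explicit tool permissions belong in the config, because the user asked for non-standard (restrictive) access.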
## Phase 3: Details (Optional; user MAY skip)

- "Any specific model preference?" (most users skip)
- "Custom temperature/sampling?" (most users skip)
- "Maximum steps before stopping?" (most users skip)
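If the user does opt into tuning, the answers land as top-level frontmatter fields. The values below are illustrative, and the field names should be verified against `references/opencode-config.md`:

```yaml
# Optional tuning fields; omit any the user skipped.
model: anthropic/claude-sonnet-4-20250514  # explicit model preference
temperature: 0.2                           # lower = more deterministic
```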
## Phase 4: Review & Refine

- Show the draft config and prompt, then ask for feedback
- "Here's what I've created. Anything you'd like to change?"
- Iterate until the user is satisfied
Key principle: start broad, and get specific only where the user shows interest. MUST NOT overwhelm with options like `top_p` unless asked.

Be flexible: if the user provides lots of info upfront, adapt; MUST NOT rigidly follow the phases. If they say "I want a code review agent that can't run shell commands", you already have answers to multiple questions.
## Recommended Structure

```markdown
# Role and Objective
[Agent purpose and scope]

# Instructions
- Core behavioral rules
- What to always/never do

## Sub-instructions (optional)
More detailed guidance for specific areas.

# Workflow
1. First, [step]
2. Then, [step]
3. Finally, [step]

# Output Format
Specify the exact format expected.

# Examples (optional)
<example>
User request
</example>
```
## XML Tags (Recommended)

XML tags improve clarity and parseability across all models:

| Tag              | Purpose                    |
| ---------------- | -------------------------- |
| `<instructions>` | Core behavioral rules      |
| `<context>`      | Background information     |
| `<examples>`     | Few-shot demonstrations    |
| `<thinking>`     | Chain-of-thought reasoning |
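A sketch of these tags in an agent's system prompt body (all content is illustrative):

```markdown
<instructions>
Review only the files the user names. Never modify code yourself.
</instructions>

<context>
This repository uses Conventional Commits and a strict lint configuration.
</context>

<examples>
<example>
User: "Review utils.ts"
Response: flag the unused import, then the unchecked null dereference.
</example>
</examples>
```

The tags carry no special runtime meaning; they simply give the model unambiguous section boundaries to key its behavior on.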