# How to Write Effective Agent Definitions
A practical guide to writing AI agent definitions that produce consistent, high-quality results. Covers persona design, rule writing, workflow structuring, and common mistakes to avoid.
## What makes a great AI agent definition?
The difference between a mediocre agent and a great one isn't length — it's structure. A well-designed agent gives the AI clear boundaries, a consistent process, and enough context to make good judgment calls.
Here's what separates agents that work from agents that don't.
## How should you define the agent's persona?
The opening lines of your agent set the tone for everything that follows. Be specific about expertise level and domain:
**Weak:** "You are an AI assistant that helps with code."

**Strong:** "You are a senior security engineer who performs thorough security assessments on codebases. You think like an attacker but communicate like a consultant."
The strong version tells the AI three things: seniority level (senior), approach (think like an attacker), and communication style (consultant, not hacker). Every response will be shaped by these constraints.
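In a full agent file, that strong persona can carry a little more texture. A sketch of how the opening might read — the final sentence is illustrative, not taken from any real agent:

```markdown
You are a senior security engineer who performs thorough security
assessments on codebases. You think like an attacker but communicate
like a consultant: describe each finding as a business risk backed by
evidence, not as an exploit write-up.
```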
## How do you define an agent's capabilities?
Don't just say what the agent does — enumerate what it can do. This creates a mental model the AI uses to scope its responses:
```markdown
## Your capabilities

### Application Security
- Injection flaws — SQL injection, XSS, command injection, SSRF
- Authentication & authorization — broken auth, privilege escalation
- Data exposure — sensitive data in logs, hardcoded secrets

### Dependency Security
- Known CVE scanning across npm, pip, cargo ecosystems
- Transitive dependency risk assessment
- Supply chain attack vectors
```
This structure helps the AI understand its coverage area and respond appropriately when asked about topics outside its scope.
## What rules should an agent definition include?
Rules should address failure modes you've actually seen, not hypothetical concerns:
**Weak rules:**

- "Be helpful and accurate"
- "Follow best practices"

**Strong rules:**

- "Never suggest changes that only match your personal style preference"
- "Prioritize by real-world exploitability, not just CVSS score"
- "If you're unsure about something, say so — don't present guesses as definitive issues"
Each strong rule addresses a specific failure mode: style-biased reviews, theoretical-only security assessments, and false confidence.
## How should you structure the agent's workflow?
Agents that define a step-by-step process produce more consistent results than agents that just describe capabilities:
```markdown
## Your review process

1. **Understand context** — Read the PR description and surrounding code
2. **Check correctness** — Verify logic handles all edge cases
3. **Assess security** — Look for OWASP Top 10 issues
4. **Evaluate performance** — Identify N+1 queries and memory leaks
5. **Provide feedback** — Give severity-rated suggestions with code examples
```
This workflow ensures the AI doesn't skip steps or front-load effort on the wrong things.
## Why is defining the output format important?
The most underrated section of an agent definition is the output format. Without it, responses vary wildly in structure:
```markdown
## Review output format

For each issue found, provide:

- **Severity**: Critical / Warning / Suggestion / Nitpick
- **Location**: File and line reference
- **Issue**: Clear description of the problem
- **Fix**: Specific suggestion or code example
```
This turns free-form text into structured, actionable output that's consistent across every use.
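A single issue reported under a format like this might look as follows. The file name, line number, and finding here are hypothetical, included only to show the shape of a conforming entry:

```markdown
- **Severity**: Critical
- **Location**: `src/db/users.ts`, line 42
- **Issue**: User-supplied `id` is interpolated directly into a SQL
  string, allowing SQL injection.
- **Fix**: Use a parameterized query, e.g.
  `db.query("SELECT * FROM users WHERE id = $1", [id])`
```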
## How do you add MCP server and skill references?
If your agent benefits from MCP servers or skills, include them with copy-paste configuration:
## Skills and tools
### MCP Servers
Add to your `.mcp.json`:
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp"]
    }
  }
}
```
- **Playwright MCP** — Verify UI changes with browser-based checks.
This makes your agent immediately actionable — users don't have to research how to set up the tools it references.
## What are the most common agent definition mistakes?
- **Too vague:** "Help with marketing." This gives the AI no constraints and produces generic output.
- **Too rigid:** Scripting every possible response. The AI needs room to adapt to the specific situation.
- **Too long:** Agents over 2,000 words often contain redundant instructions that dilute the important parts.
- **No rules:** Without explicit constraints, the AI defaults to its base behavior, which may not match your needs.
- **No output format:** Inconsistent response structure makes agents unreliable in workflows.
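Putting the pieces together, a minimal skeleton that avoids these mistakes might look like the sketch below. The section names follow the examples earlier in this guide; the security domain and bullet contents are illustrative, so adapt them to your own use case:

```markdown
You are a senior security engineer who performs thorough security
assessments on codebases.

## Your capabilities
- Injection flaws, broken auth, sensitive data exposure
- Dependency and supply chain risk

## Rules
- Prioritize by real-world exploitability, not just CVSS score
- If you're unsure, say so; don't present guesses as definitive issues

## Your review process
1. Understand context
2. Check correctness and security
3. Provide severity-rated feedback

## Review output format
For each issue: Severity, Location, Issue, Fix
```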
## How do you publish your agent?
Once your agent is written, upload it to Agent Shelf or use the publish skill directly from your coding environment. The community benefits from every well-crafted agent — and you'll get download stats and feedback to improve it over time.
Written by Agent Shelf Team
The Agent Shelf team builds open infrastructure for AI agent discovery and distribution. We maintain the Agent Shelf registry, MCP server, and publish skill.