Writing Agent Definitions for Teams
How to standardize AI agent definitions across your team. Covers conventions, code review for agents, sharing, version management, and common organizational patterns.
Why teams need agent conventions
When one developer uses AI agents, the setup is simple: write what works for you, iterate, move on. When a team uses agents, new problems emerge.
Different developers write agents differently. One person's code review agent produces a severity table. Another's produces a bullet list. A third person doesn't use a code review agent at all — they just prompt the AI from scratch each time. The result: inconsistent quality, duplicated effort, and no shared improvement.
Team agent conventions solve this by answering three questions: Which agents does the team use? Where do they live? How do we improve them?
Organizing agent definitions
Repository-level agents
Store agents that are specific to a project in the repository itself:
your-project/
├── .agents/
│   └── agents/
│       ├── code-reviewer.md
│       ├── test-writer.md
│       └── api-designer.md
├── AGENTS.md        ← project context
└── src/
These agents are versioned with the project code, reviewed in PRs, and available to everyone who clones the repo. Since they live in .agents/agents/, they're discovered automatically by Claude Code, Cursor, Windsurf, Codex, and other tools.
Use repo-level agents for: project-specific workflows, coding standards that apply to this codebase, agents that reference project-specific patterns or libraries.
Organization-level agents
For agents that apply across multiple projects — your team's code review process, documentation standards, or security audit checklist — publish them to Agent Shelf under a shared account and install them in each repo.
Alternatively, maintain them in a dedicated repository:
your-org/agent-definitions/
├── coding/
│   ├── code-reviewer.md
│   └── test-writer.md
├── security/
│   └── security-auditor.md
└── writing/
    └── doc-writer.md
Teams can then install these into projects using the AgentShelf skill or by copying them manually.
Use org-level agents for: company-wide coding standards, shared review processes, compliance-related checks, and any expertise that spans multiple projects.
Personal agents
Individual developers often have agents that reflect their personal workflow — a specific debugging approach, a note-taking style, or a way of exploring unfamiliar code. Store these in the user-level agent directory (~/.agents/agents/ or ~/.claude/agents/), not in the project.
Personal agents are fine for individual productivity. But if a personal agent is consistently useful, consider proposing it as a team agent.
Establishing conventions
Agent naming
Consistent naming helps the team find and reference agents:
- Use descriptive, lowercase slugs: code-reviewer, test-writer, security-auditor
- Include the task, not the tool: api-doc-writer, not swagger-generator
- Avoid personal names or inside jokes: code-reviewer, not bobs-review-bot
- Use consistent suffixes for similar agents: *-reviewer, *-writer, *-generator
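These conventions are easy to lint mechanically. A minimal sketch; the regex and the suffix list are assumptions to adapt to whatever suffixes your team agrees on:

```python
import re

# Lowercase words joined by single hyphens, e.g. "code-reviewer".
SLUG_RE = re.compile(r"^[a-z][a-z0-9]*(-[a-z0-9]+)*$")
# Hypothetical agreed suffixes; adjust to your team's list.
SUFFIXES = ("-reviewer", "-writer", "-auditor", "-generator")

def check_slug(slug: str) -> list[str]:
    """Return a list of naming-convention violations (empty means OK)."""
    problems = []
    if not SLUG_RE.match(slug):
        problems.append("not a lowercase hyphenated slug")
    if not slug.endswith(SUFFIXES):
        problems.append("missing an agreed suffix")
    return problems

print(check_slug("code-reviewer"))  # → []
print(check_slug("BobsReviewBot"))  # → two violations
```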
Frontmatter standards
Agree on frontmatter conventions:
---
id: "code-reviewer" # lowercase slug
name: "Code Reviewer" # human-readable
description: "Reviews code changes for..." # start with what it does
version: "1.2.0" # SemVer, always bump
category: "coding" # from the 14 standard categories
tags: ["code-review", "typescript"] # specific, not generic
license: "MIT" # or your company's preference
---
For deeper guidance on each field, see Agent Definition Frontmatter: Every Field Explained.
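Frontmatter conventions can also be enforced in CI. A minimal, stdlib-only sketch for the flat key: "value" style shown above; a real linter would use a YAML parser, and the required-field set mirrors the example:

```python
import re

REQUIRED = {"id", "name", "description", "version", "category", "tags", "license"}
SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+$")

def parse_frontmatter(text: str) -> dict:
    """Extract key/value pairs from a ----delimited frontmatter block.
    Handles only the flat 'key: "value"  # comment' style shown above."""
    match = re.search(r"^---\n(.*?)\n---", text, re.DOTALL)
    fields = {}
    if match:
        for line in match.group(1).splitlines():
            key, sep, value = line.partition(":")
            if sep:
                # Drop trailing comments and surrounding quotes.
                fields[key.strip()] = value.split("#")[0].strip().strip('"')
    return fields

def lint(text: str) -> list[str]:
    """Report missing required fields and non-SemVer versions."""
    fields = parse_frontmatter(text)
    problems = [f"missing field: {k}" for k in sorted(REQUIRED - fields.keys())]
    version = fields.get("version", "")
    if version and not SEMVER_RE.match(version):
        problems.append(f"version is not SemVer: {version}")
    return problems
```

Running lint over every file in .agents/agents/ as a PR check catches convention drift before review.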
Structure template
Give the team a standard structure for agent bodies:
# Agent Name
[One paragraph: who this agent is and what it does]
## Workflow
[Numbered steps defining the process]
## Rules
[Bullet points: what to do and what not to do]
## Output format
[How the output should be structured]
Not every agent needs all sections, but having a common template makes agents predictable and easier to review.
Code review for agents
Agent definitions should go through the same review process as code. They define behavior that affects the entire team, and small wording changes can have outsized effects on AI output.
What to review
Persona. Is the persona appropriate? A "junior developer" persona will produce different output than a "senior engineer with 15 years of security experience." Make sure the persona matches the intended use.
Rules. Are the rules specific and actionable? "Check for security issues" is too vague. "Check for SQL injection in any function that takes user input and constructs a query" is reviewable.
Workflow. Does the workflow make sense? Are the steps in the right order? Would you follow this process yourself?
Output format. Is the output format useful for the team? If the agent produces a structured report, check that the fields are what the team actually needs.
Edge cases. What happens with unusual inputs? A code review agent should handle empty PRs, massive PRs, and PRs that only change documentation. Are there instructions for edge cases?
Review checklist
A simple checklist for agent definition PRs:
- [ ] Persona is specific and appropriate for the task
- [ ] Rules are actionable, not vague
- [ ] Workflow steps are in logical order
- [ ] Output format is defined and useful
- [ ] Version is bumped (patch/minor/major as appropriate)
- [ ] Changelog describes what changed
- [ ] No project-specific details in org-level agents
- [ ] Examples are included where behavior might be ambiguous
Managing versions across teams
Pinning versions
For consistency, pin agent versions in your project configuration. If every developer uses code-reviewer v1.2.0, they get the same review behavior. Without pinning, one developer might have an older version that skips a security check the team now considers mandatory.
When installing from Agent Shelf, download a specific version. When storing in-repo, the git commit history serves as your version control.
Upgrade workflow
When an agent gets a new version:
- One team member reviews the changelog
- They test the new version on a representative PR or task
- If it's a minor or patch bump and tests pass, update the project
- If it's a major bump, discuss in team standup before upgrading
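The minor/patch-versus-major decision can be automated when diffing an agent's frontmatter between the installed and the new version. A minimal sketch, assuming plain MAJOR.MINOR.PATCH strings:

```python
def bump_kind(old: str, new: str) -> str:
    """Classify a SemVer change as 'major', 'minor', 'patch', or 'none'.
    Assumes plain MAJOR.MINOR.PATCH strings with no pre-release tags."""
    o, n = [tuple(map(int, v.split("."))) for v in (old, new)]
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    if n[2] != o[2]:
        return "patch"
    return "none"

print(bump_kind("1.2.0", "1.3.0"))  # → minor
```

A CI job could auto-merge "patch" updates, flag "minor" for a quick look, and block "major" until the team has discussed it.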
For more on version management, see How to Version Your AI Agents with SemVer.
Changelog discipline
Every agent update should include a changelog entry. This is especially important for team agents where behavior changes affect everyone:
## 1.3.0
### Added
- Now checks for React useEffect dependency array completeness
- Added explicit rule for consistent error boundary usage
### Changed
- Severity for missing null checks raised from MEDIUM to HIGH
(per team decision in 2026-04-01 retro)
Note the reference to a team decision. This connects the agent change to the discussion that motivated it.
Common team patterns
Shared code review agent
The most popular team agent. Defines the team's review process, severity levels, what to check for, and the output format. Everyone on the team uses the same agent, producing consistent, predictable reviews.
Key decisions to make:
- What severity levels to use and what each means
- What categories to check (security, performance, correctness, style)
- Whether to include code fix suggestions or just identify issues
- Output format (table, list, structured findings)
Project setup agent
An agent that knows how to set up new components, endpoints, or features in your specific project. It reads your existing codebase patterns and produces new code that matches.
Key decisions:
- What patterns to follow for each type of setup
- What files need to be created (component, test, story, index export)
- What boilerplate to include vs. leave out
- Naming conventions and file placement
Onboarding agent
Helps new team members understand the codebase. It knows the architecture, can explain how systems work by reading the code, and guides new developers through common tasks.
Key decisions:
- What architecture documentation to include
- How to explain the most common developer tasks
- What gotchas or non-obvious patterns to highlight
- Where to point people for more information
Release agent
Manages the release process: generates changelogs from git history, checks that version numbers are bumped, verifies that documentation is updated, and produces release notes.
Key decisions:
- Changelog format (Keep a Changelog, custom)
- What counts as a user-facing change vs. internal
- How to handle breaking changes in release notes
- Whether to include links to PRs and issues
Measuring agent effectiveness
How do you know if your team's agents are working?
Consistency check
Compare the output of the same agent across different team members and different codebases. If the output is consistent — same format, similar depth, same types of findings — the agent is well-defined. If output varies wildly, the instructions need tightening.
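One way to make the consistency check concrete is to compare the heading structure of saved agent outputs. A sketch, assuming outputs are collected as markdown strings:

```python
import re
from collections import Counter

def heading_signature(markdown: str) -> tuple:
    """Reduce an agent's markdown output to its sequence of headings,
    so structurally identical reports compare equal."""
    return tuple(re.findall(r"^#+\s+(.+)$", markdown, re.MULTILINE))

def consistency(outputs: list[str]) -> float:
    """Fraction of outputs that share the most common heading structure."""
    counts = Counter(heading_signature(o) for o in outputs)
    return counts.most_common(1)[0][1] / len(outputs)
```

A score near 1.0 suggests the output-format instructions are doing their job; a low score is a signal to tighten them.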
Feedback loop
After a sprint or month, ask the team:
- Which agents do you use most?
- Which agents produce the most useful output?
- Which agents are you ignoring or working around?
- What tasks do you wish had an agent?
Use the feedback to improve existing agents and identify new ones to build.
Iteration cadence
Agents aren't set-and-forget. Schedule periodic reviews (monthly or quarterly) to:
- Update agents based on team feedback
- Remove agents nobody uses
- Add agents for newly identified needs
- Adjust rules that are too strict or too permissive
Getting started
- Pick one agent. Start with a code review agent — it's the most universally useful and easiest to evaluate.
- Write it together. Draft the agent as a team, not in isolation. Different perspectives produce better rules.
- Use it for a week. Everyone uses the same agent for all code reviews for one week.
- Retrospect. What worked? What didn't? What needs to change?
- Iterate and expand. Refine the agent, bump the version, then consider adding a second agent for testing or documentation.
For starter templates, see Agent Definition Templates. For writing guidance, see How to Write Effective Agent Definitions. To share your team's agents with the community, publish them on Agent Shelf.
Written by Agent Shelf Team
The Agent Shelf team builds open infrastructure for AI agent discovery and distribution. We maintain the Agent Shelf registry, MCP server, and publish skill.