Scaling and Customizing Agent Teams
Patterns · 9 min
An agent team grows with your requirements. New agents follow a five-step process: define role, create bot, write configuration, set up polling, register with manager. When scaling to five or more agents, API costs, machine resources, and coordination complexity become important factors.
Adding a New Agent
Adding a new agent follows a repeatable process. Here is an example with a customer support agent:
Define the Role
What does this agent do? The customer support agent reads incoming customer emails, drafts appropriate responses, categorizes the inquiry, and escalates complex issues to a human.
Create Bot Account
Create a new bot in the team chat (e.g., Team-Chat) with a descriptive name. Generate the access token and store it securely.
Write Configuration
Create a CLAUDE.md that defines the role, tools, and context. The support agent needs access to product documentation, pricing information, return policies, and the brand voice guide. It should be able to read and write files but not execute system commands.
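A minimal sketch of what such a CLAUDE.md might look like — the file paths and section names are illustrative assumptions, not a prescribed format:

```markdown
# Role: Customer Support Agent

You read incoming customer emails, draft responses, categorize inquiries,
and escalate complex issues to a human.

## Context
- Product documentation: docs/products/
- Pricing: docs/pricing.md
- Return policies: docs/returns.md
- Brand voice: docs/brand-voice.md

## Permissions
- Allowed: read files, write files
- Forbidden: executing system commands

## Escalation
If an inquiry involves refunds beyond policy limits or legal questions,
tag a human instead of answering.
```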
Configure Polling
Configure the polling script with the new bot's token and user ID and set it to monitor the appropriate channels.
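A polling loop in this setup can be sketched as follows. The chat API is an assumption: any client function that returns message dicts with `id` and `text` fields works, and the `SUPPORT_BOT_USER_ID` environment variable is hypothetical.

```python
import os
import time

def poll_once(fetch_messages, handle_mention, bot_user_id, channels,
              last_seen_id=0):
    """Fetch messages newer than last_seen_id and handle bot mentions.

    fetch_messages(channel, since_id) returns message dicts; returns the
    newest message id seen, to be passed into the next poll.
    """
    newest = last_seen_id
    for channel in channels:
        for msg in fetch_messages(channel, last_seen_id):
            if f"@{bot_user_id}" in msg["text"]:
                handle_mention(msg)
            newest = max(newest, msg["id"])
    return newest

def poll_forever(fetch_messages, handle_mention, interval_s=60):
    """Run the polling loop; the bot's identity comes from the environment."""
    bot_user_id = os.environ["SUPPORT_BOT_USER_ID"]  # hypothetical variable
    last_seen = 0
    while True:
        last_seen = poll_once(fetch_messages, handle_mention,
                              bot_user_id, ["support"], last_seen)
        time.sleep(interval_s)
```

Injecting `fetch_messages` and `handle_mention` keeps the loop testable without a live chat server.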
Register with Manager
Update the manager's CLAUDE.md to include the new agent in its roster and delegation rules. Define when the manager should route tasks to the new agent.
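The roster and delegation-rule section of the manager's CLAUDE.md might look like this — agent names and rules are illustrative:

```markdown
## Roster
- @coding-bot: feature development, database migrations
- @support-bot: customer emails, inquiry triage

## Delegation Rules
- Incoming customer email or support question → @support-bot
- @support-bot escalations that need code changes → @coding-bot
- Anything ambiguous: ask in the main channel before delegating
```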
Adjusting Existing Roles
- Expand scope: Give the coding agent responsibility for database migrations in addition to feature development
- Narrow scope: Split a generalist agent into two specialists when the workload grows
When narrowing scope: Update the manager's delegation rules. If you split the coding agent into frontend and backend, the manager needs to know which type of task goes where.
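Those delegation rules amount to a simple router. A sketch, where the agent names and keyword lists are illustrative assumptions:

```python
# Manager-side routing after splitting the coding agent into specialists.
FRONTEND_KEYWORDS = {"ui", "css", "component", "layout", "browser"}
BACKEND_KEYWORDS = {"api", "database", "migration", "endpoint", "queue"}

def route_coding_task(description: str) -> str:
    """Return which specialist agent should receive a coding task."""
    words = set(description.lower().split())
    if words & FRONTEND_KEYWORDS:
        return "frontend-agent"
    if words & BACKEND_KEYWORDS:
        return "backend-agent"
    return "manager"  # ambiguous: manager asks for clarification first
```

In practice the manager agent applies such rules from its CLAUDE.md rather than from code, but the logic it needs to express is the same.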
Multi-Agent Workflows
| Workflow | Sequence |
|---|---|
| Content Pipeline | Research Agent creates brief → Content Agent writes post → QA Agent reviews → Manager coordinates |
| Deployment | Coding Agent prepares release → QA tests → Coding deploys → Monitoring verifies → Content writes release notes |
| Customer Onboarding | Support receives customer → Content creates welcome email → Coding provisions account → Manager tracks completion |
Scaling Factors
| Factor | Details | Action |
|---|---|---|
| API Costs | Each agent consumes credits when processing tasks | Adjust polling intervals: busy agent every 30s, quiet agent every 5min |
| Machine Resources | Each instance uses memory and CPU | 2-3 agents per machine, distribute across machines for larger teams |
| Channel Noise | More agents = busier main channel | Create sub-channels for different workflows |
| Coordination | With 5+ agents: casual coordination is not enough | Document structured delegation rules, priority systems, conflict resolution |
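The API-cost row above can be implemented as an adaptive polling interval: busy agents poll often, quiet agents back off. A sketch, with arbitrary thresholds as assumptions:

```python
def next_poll_interval(mentions_last_hour: int,
                       busy_s: int = 30, quiet_s: int = 300) -> int:
    """Pick a polling interval from recent activity.

    Busy agents (10+ mentions/hour) poll every 30 s; idle agents back off
    to 5 min, cutting API calls roughly tenfold.
    """
    if mentions_last_hour >= 10:
        return busy_s
    if mentions_last_hour == 0:
        return quiet_s
    # Scale linearly between the two extremes for moderate activity.
    span = quiet_s - busy_s
    return busy_s + span * (10 - mentions_last_hour) // 10
```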
Advanced Patterns
Scheduled Agents
Not all agents need to respond to mentions. Some work on schedules: the monitoring agent checks infrastructure every five minutes, the analytics agent compiles daily reports, the backup agent verifies backups every night. For scheduled agents, replace the polling loop with a cron-triggered script.
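A crontab sketch for the schedules above — the script paths are illustrative:

```
# Monitoring agent: check infrastructure every five minutes
*/5 * * * *  /opt/agents/monitoring/run_check.sh
# Analytics agent: compile the daily report at 07:00
0 7 * * *    /opt/agents/analytics/daily_report.sh
# Backup agent: verify backups nightly at 02:30
30 2 * * *   /opt/agents/backup/verify.sh
```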
Cross-Agent Memory
Agents can share context through a common MEMORY.md file. The coding agent notes a deployment, the QA agent then knows what to test, the monitoring agent knows what to watch for. Shared memory creates team awareness without direct communication.
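Shared memory can be as simple as timestamped appends to one file that every agent reads at the start of a task. A sketch — the path and entry format are assumptions:

```python
from datetime import datetime, timezone
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")  # shared by all agents on the machine

def append_memory(agent: str, note: str, path: Path = MEMORY_FILE) -> None:
    """Append a timestamped entry so other agents see it on their next read."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- [{stamp}] {agent}: {note}\n")

def read_memory(path: Path = MEMORY_FILE) -> list[str]:
    """Return all entries; agents read this before starting a task."""
    if not path.exists():
        return []
    return [line.rstrip("\n") for line in path.open(encoding="utf-8")]
```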
Graceful Degradation
When an agent goes offline, the system should not grind to a halt. The manager should recognize when a delegated task gets no response and either retry, delegate to an alternative agent, or escalate to a human. Build this timeout logic into the manager's CLAUDE.md.
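The timeout logic above can be sketched as: delegate, wait for acknowledgment, then fall back to the next agent or escalate. Agent names and the timeout value are illustrative:

```python
def delegate_with_fallback(task, agents, send, wait_for_ack, timeout_s=300):
    """Try each agent in order; escalate to a human if none responds.

    send(agent, task) dispatches the task; wait_for_ack(agent, task,
    timeout_s) returns True if the agent acknowledged within the timeout.
    """
    for agent in agents:
        send(agent, task)
        if wait_for_ack(agent, task, timeout_s):
            return agent
    send("human-operator", task)  # escalation of last resort
    return "human-operator"
```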
Further Reading
- Agent Roles – Understanding the role model
- Building an AI Agent Team – From zero to first team
- Memory Management – How agents share knowledge across sessions
Sources
- Anthropic: Agent Patterns – Multi-Agent Orchestration
- Anthropic: Claude Code Overview – Skills, agents, hooks
Next step: move from knowledge to implementation
If you want more than theory: setups, workflows, and templates from real operations, for teams that want local, documented AI systems.
- Local and self-hosted by default
- Documented and auditable
- Built from our own runtime
- Made in Austria