AI EngineeringWiki

Scaling and Customizing Agent Teams

Patterns · 9 min

An agent team grows with your requirements. New agents follow a five-step process: define role, create bot, write configuration, set up polling, register with manager. When scaling to five or more agents, API costs, machine resources, and coordination complexity become important factors.

Adding a New Agent

Adding a new agent follows a repeatable process. Here is an example with a customer support agent:

1. Define the Role

What does this agent do? The customer support agent reads incoming customer emails, drafts appropriate responses, categorizes the inquiry, and escalates complex issues to a human.

2. Create Bot Account

Create a new bot in the team chat (e.g., Team-Chat) with a descriptive name. Generate the access token and store it securely.

3. Write Configuration

Create a CLAUDE.md that defines the role, tools, and context. The support agent needs access to product documentation, pricing information, return policies, and the brand voice guide. It should be able to read and write files but not execute system commands.
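A CLAUDE.md for such an agent might look roughly like this. The file layout, paths, and section names are illustrative, not a fixed schema; adapt them to how your agents read configuration:

```markdown
# Role: Customer Support Agent

You read incoming customer emails, draft responses, categorize
inquiries, and escalate complex issues to a human.

## Context
- docs/product/        (product documentation)
- docs/pricing.md      (pricing information)
- docs/returns.md      (return policies)
- docs/brand-voice.md  (brand voice guide)

## Permissions
- Allowed: read files, write files
- Not allowed: execute system commands

## Escalation
If an inquiry involves legal threats, refunds above policy limits,
or anything not answerable from the documents above, tag a human.
```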

4. Configure Polling

Configure the polling script with the new bot's token and user ID and set it to monitor the appropriate channels.
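A minimal polling loop, sketched in Python against a hypothetical chat client. `fetch_messages`, `BOT_USER_ID`, and the message shape are assumptions; replace the stub with a real API call to your team chat:

```python
import time

BOT_USER_ID = "support-bot"       # hypothetical bot user ID
POLL_INTERVAL_SECONDS = 30

def fetch_messages(channel):
    """Stub: replace with a real API call to your team chat."""
    return [
        {"id": 1, "text": "@support-bot please draft a reply to ticket #42"},
        {"id": 2, "text": "unrelated chatter"},
    ]

def mentions_bot(message):
    """True if the message mentions this agent's bot account."""
    return f"@{BOT_USER_ID}" in message["text"]

def poll_once(channel="support"):
    """Fetch new messages and keep only the ones addressed to this agent."""
    return [m for m in fetch_messages(channel) if mentions_bot(m)]

def run():
    """Poll forever; in production, hand each task to the agent runtime."""
    while True:
        for task in poll_once():
            print("handle:", task["text"])
        time.sleep(POLL_INTERVAL_SECONDS)
```

In production the loop would also track which message IDs have already been handled, so a restart does not reprocess old mentions.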

5. Register with Manager

Update the manager's CLAUDE.md to include the new agent in its roster and delegation rules. Define when the manager should route tasks to the new agent.
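In the manager's CLAUDE.md, the roster and delegation entries might look like this (the format is illustrative; channel and bot names are placeholders):

```markdown
## Roster

- support-bot: customer support agent
  - Route to it: incoming customer emails, inquiry categorization
  - Escalates to human: legal issues, refunds above policy limits

## Delegation Rules

- Customer email arrives in #support → delegate to support-bot
- support-bot reports "escalate" → notify the on-call human
```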

Adjusting Existing Roles

  • Expand scope: Give the coding agent responsibility for database migrations in addition to feature development
  • Narrow scope: Split a generalist agent into two specialists when the workload grows

When narrowing scope: Update the manager's delegation rules. If you split the coding agent into frontend and backend, the manager needs to know which type of task goes where.
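A naive sketch of such a routing rule, as a keyword classifier the manager could fall back on. The keyword lists and bot names are illustrative; in practice the manager model itself usually does the classification:

```python
# Illustrative keyword router for a manager that splits coding work
# between a frontend and a backend specialist.
FRONTEND_KEYWORDS = {"css", "ui", "component", "layout", "react"}
BACKEND_KEYWORDS = {"database", "migration", "api", "endpoint", "queue"}

def route_coding_task(description: str) -> str:
    """Return the bot to delegate to, or flag the task for manual review."""
    words = set(description.lower().split())
    frontend_hits = len(words & FRONTEND_KEYWORDS)
    backend_hits = len(words & BACKEND_KEYWORDS)
    if frontend_hits > backend_hits:
        return "frontend-bot"
    if backend_hits > frontend_hits:
        return "backend-bot"
    return "manager-review"   # ambiguous: let the manager decide
```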

Multi-Agent Workflows

  • Content Pipeline: Research Agent creates brief → Content Agent writes post → QA Agent reviews → Manager coordinates
  • Deployment: Coding Agent prepares release → QA tests → Coding deploys → Monitoring verifies → Content writes release notes
  • Customer Onboarding: Support receives customer → Content creates welcome email → Coding provisions account → Manager tracks completion
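The content pipeline above can be sketched as a sequential handoff. The agent functions here are stubs standing in for real delegations through the chat:

```python
def research_agent(topic):
    """Stub: a real agent would produce a research brief."""
    return f"brief on {topic}"

def content_agent(brief):
    """Stub: a real agent would draft the post from the brief."""
    return f"post based on {brief}"

def qa_agent(post):
    """Stub: a real QA agent would review and possibly reject the draft."""
    return {"post": post, "approved": True}

def run_content_pipeline(topic):
    """Manager-coordinated handoff: research -> content -> QA."""
    brief = research_agent(topic)
    post = content_agent(brief)
    return qa_agent(post)
```

The key property is that each stage only sees the previous stage's output, which keeps the agents decoupled.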

Scaling Factors

  • API Costs: Each agent consumes credits when processing tasks. Action: adjust polling intervals, e.g. a busy agent every 30 s, a quiet agent every 5 min.
  • Machine Resources: Each instance uses memory and CPU. Action: run 2-3 agents per machine and distribute across machines for larger teams.
  • Channel Noise: More agents mean a busier main channel. Action: create sub-channels for different workflows.
  • Coordination: With 5+ agents, casual coordination is not enough. Action: document structured delegation rules, priority systems, and conflict resolution.
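One way to act on the API-cost factor is to scale each agent's polling interval with its recent activity. The thresholds here are illustrative and should be tuned per team:

```python
def polling_interval(tasks_last_hour: int) -> int:
    """Return a polling interval in seconds based on recent load.

    Busy agents poll every 30 s; quiet ones back off to 5 min.
    Thresholds are illustrative, not a recommendation.
    """
    if tasks_last_hour >= 10:
        return 30        # busy: poll every 30 seconds
    if tasks_last_hour >= 2:
        return 120       # moderate: every 2 minutes
    return 300           # quiet: every 5 minutes
```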

Advanced Patterns

Scheduled Agents

Not all agents need to respond to mentions. Some work on schedules: the monitoring agent checks infrastructure every five minutes, the analytics agent compiles daily reports, the backup agent verifies backups every night. For scheduled agents, replace the polling loop with a cron-triggered script.
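As crontab entries, the schedules above might look like this (the script paths and times are placeholders for your own wrappers):

```
*/5 * * * *  /opt/agents/monitoring/run-check.sh     # infrastructure check every 5 min
0 7 * * *    /opt/agents/analytics/daily-report.sh   # daily report at 07:00
0 2 * * *    /opt/agents/backup/verify-backups.sh    # nightly backup verification
```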

Cross-Agent Memory

Agents can share context through a common MEMORY.md file. The coding agent notes a deployment, the QA agent then knows what to test, the monitoring agent knows what to watch for. Shared memory creates team awareness without direct communication.
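A minimal sketch of shared memory via the MEMORY.md file mentioned above, assuming each agent appends timestamped one-line notes (the file path and line format are assumptions):

```python
from datetime import datetime, timezone
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")   # shared file; path is an assumption

def append_memory(agent: str, note: str, path: Path = MEMORY_FILE) -> None:
    """Append a timestamped note so other agents can pick it up."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} [{agent}] {note}\n")

def read_memory(path: Path = MEMORY_FILE) -> list[str]:
    """Return all shared notes, oldest first."""
    if not path.exists():
        return []
    return path.read_text(encoding="utf-8").splitlines()
```

Because every agent appends rather than overwrites, the file doubles as a team-wide audit log.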

Graceful Degradation

When an agent goes offline, the system should not grind to a halt. The manager should recognize when a delegated task gets no response and either retry, delegate to an alternative agent, or escalate to a human. Build this timeout logic into the manager's CLAUDE.md.
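The manager's retry-then-escalate policy can be sketched as a pure decision function over open delegations. The task shape, timeout, and escalation ladder are illustrative:

```python
import time

TASK_TIMEOUT_SECONDS = 600   # illustrative: act after 10 min of silence

def check_delegations(open_tasks, now=None):
    """Decide what to do with delegated tasks that got no response.

    open_tasks: dicts with 'agent', 'delegated_at' (epoch seconds),
    and 'retries'. Returns a list of (task, action) decisions.
    """
    now = time.time() if now is None else now
    decisions = []
    for task in open_tasks:
        if now - task["delegated_at"] < TASK_TIMEOUT_SECONDS:
            decisions.append((task, "wait"))
        elif task["retries"] == 0:
            decisions.append((task, "retry"))
        elif task["retries"] == 1:
            decisions.append((task, "delegate-to-alternative"))
        else:
            decisions.append((task, "escalate-to-human"))
    return decisions
```

Keeping the decision logic pure (time passed in explicitly) makes the escalation ladder easy to test before wiring it into the manager.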

