Steps for building your AI agent team
This resource is based on "Stop prompting AI tools, start managing AI teams," featuring Matt Hastings of MVP Club, published on the AI Lab by ActiveCampaign.

By the end of this checklist, you’ll have a documented, tested AI agent that handles a repeatable task in your workflow. It’s based on the approach Matt Hastings uses at MVP Club to train AI agents as team members rather than tools.
Expect to spend 2–3 hours on initial setup, with 30-minute refinement sessions over the following week.
Before you start
- Choose an LLM platform: Claude or ChatGPT with a paid plan that supports projects or custom instructions
- Identify one repeatable task: Pick the smallest unit of work in your workflow that you do regularly (writing copy, pulling campaign reports, researching competitors)
- Gather your reference materials: Any documents, examples of good output, or process notes that describe how you currently do this task
One way to think about creating an agent is that you’re writing almost a persona or a character.
The workflow
Phase 1: Document your agent’s job
After this phase, you’ll have: a written document that describes what the AI agent’s job is, how it should work, and what good output looks like
- Start a conversation with your LLM about your goal: Before writing any documentation, have the LLM ask you questions to surface assumptions you may have missed.
Starter prompt:
I want to build an AI agent that handles [TASK] for my [ROLE/BUSINESS]. Before I start setting it up, ask me questions about my goal so we can make sure I haven’t missed anything important.
- Research how others have structured similar agents: Ask the LLM to look at how other people have approached this type of task with AI agents.
Starter prompt:
Now that you understand my goal, research how other people have structured AI agents to [TASK]. What documentation and setup have they found effective?
- Ask the LLM to draft the agent's documentation: Once the research is done, have the LLM turn your conversation and its findings into a working document for the agent.
Starter prompt:
Based on our conversation and your research, write a document that describes this agent’s job. Include: a description and backstory, procedures it should always follow, things it should never do, and how it should handle edge cases. Write it so another AI agent reading this document would know exactly what to do.
- Review and refine the documentation: Read through the document the LLM produced and add any missing context, examples, or constraints that you know from experience.
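As an illustration only, the document the LLM drafts for, say, a competitor-research agent might be shaped like this (the section names and content below are hypothetical, not taken from the episode):

```markdown
# Agent: Competitor Research Assistant

## Description and backstory
You are a research analyst for a small marketing team. You monitor
competitors' positioning, pricing, and campaigns.

## Always
- Cite the source URL for every claim.
- Summarize findings in a table before any narrative.

## Never
- Invent pricing or feature details you cannot verify.

## Edge cases
- If a competitor has no public pricing, say so explicitly instead of estimating.
```

Whatever shape you land on, step 4 below is where you add the context only you know: real examples of good output, constraints, and the judgment calls the LLM couldn't have guessed.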
Phase 2: Build and test
After this phase, you’ll have: a working agent that has attempted your task at least once, with output you can evaluate
- Upload the documentation to your agent: In Claude, add it to a Project. In ChatGPT, add it to a custom GPT’s instructions or a project folder. In VS Code with an AI extension, add it as a file in your project directory.
- Give the agent its first task: Provide a real example of the work you want it to do, along with any inputs it needs (data, context, prior examples).
- Evaluate the output: Compare what the agent produced against what you would have done yourself. Note what’s right, what’s wrong, and what’s missing.
Your control is at the beginning and at the end, and in setting the pace of iteration.
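For the VS Code route, the "give the agent its first task" step can be sketched in a few lines: load the documentation file and prepend it to a real task input. The file name and prompt wording here are illustrative assumptions, not a prescribed format:

```python
# Sketch: combine the agent's documentation with one real task (Phase 2).
# "agent.md" and the connecting sentences are illustrative choices.
from pathlib import Path

def build_first_task(doc_path: str, task_input: str) -> str:
    """Return a single prompt: the agent's job description plus one real task."""
    documentation = Path(doc_path).read_text(encoding="utf-8")
    return (
        documentation
        + "\n\nHere is your first task. Follow the procedures above.\n\nInput:\n"
        + task_input
    )
```

Paste the resulting string into a fresh conversation (or send it through your platform's API) and evaluate what comes back against what you would have produced yourself.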
Phase 3: Refine with feedback
After this phase, you’ll have: an agent that produces consistent, good output after 5–6 rounds of iteration
- Share the output with the original LLM and ask it to evaluate the result: Go back to the LLM that helped you write the documentation and show it the agent’s output.
Starter prompt:
Here’s the output my agent produced: [PASTE OUTPUT]. Does this meet the goal we discussed? What’s wrong with it, and what would need to change in the documentation to prevent these issues?
- Update the documentation based on feedback: Apply the suggested changes to your agent’s documentation. Be specific about what went wrong and add examples of what “good” looks like.
- Run the agent again and repeat: Give it another task using the updated documentation. Repeat the evaluate-and-refine cycle 5–6 times until the output is consistently good.
- Add an orchestrator layer (optional): Once you have multiple trained agents, create a central orchestrator agent that interprets your commands and dispatches work to the right subagent.
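The orchestrator idea can be sketched as a thin routing layer: map each trained subagent to a trigger keyword and dispatch incoming commands. The subagent names, keywords, and the plain keyword matching below are simplifying assumptions for illustration:

```python
# Sketch of an orchestrator that dispatches commands to trained subagents.
# Subagent names and trigger keywords are hypothetical examples.
SUBAGENTS = {
    "copy": "copywriting agent",
    "report": "campaign reporting agent",
    "competitor": "competitor research agent",
}

def route(command: str) -> str:
    """Return the subagent whose trigger keyword appears in the command."""
    lowered = command.lower()
    for keyword, agent in SUBAGENTS.items():
        if keyword in lowered:
            return agent
    return "ask the human"  # no match: escalate instead of guessing
```

In practice the orchestrator would itself be an LLM agent whose documentation describes each subagent and when to hand work to it; the keyword table above just makes the dispatch logic concrete.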
Quick reference
- Total time: 2–3 hours for initial setup, then 30-minute refinement sessions over the following week
- Tools needed: Claude or ChatGPT (paid plan), a text editor for documentation
- Key output: A documented AI agent that handles a repeatable task with consistent quality