Intro to the operator-to-orchestrator framework
This one-pager is based on *Stop prompting AI tools. Start managing AI teams*, featuring Matt Hastings of MVP Club, published on AI Lab by ActiveCampaign.

The big idea
Most people use AI the way they use traditional software: press a button, expect a result. Hastings argues that the real shift is treating AI like a new hire who needs onboarding, clear documentation, and ongoing feedback. When you manage AI instead of operating it, you can step back from execution and work on multiple projects simultaneously.
The framework
| Phase | What you do | What the AI gets |
| --- | --- | --- |
| 1. Document | Write agent persona, procedures, context, constraints | A clear job description and all the resources it needs to succeed |
| 2. Build | Let the agent do the work with your documentation as its guide | A first attempt at the task, informed by your instructions |
| 3. Refine | Evaluate output, share what’s wrong, update documentation | Corrected instructions that prevent the same mistakes next time |
Repeat the loop 5–6 times before expecting consistent results. Each cycle tightens the documentation and narrows the gap between what you want and what the AI delivers.
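The Document → Build → Refine loop above can be sketched as a small feedback cycle. This is a hypothetical illustration, not Hastings's implementation: `call_model` stands in for whatever LLM API you use, and `review` stands in for your own evaluation of the output.

```python
# A minimal sketch of the Document -> Build -> Refine loop.
# `call_model` and `review` are hypothetical stand-ins for your
# LLM call and your evaluation of its output.

def refine_loop(task, documentation, call_model, review, max_cycles=6):
    """Run the loop until the output passes review or cycles run out."""
    output = None
    for cycle in range(1, max_cycles + 1):
        # Build: the agent attempts the task with your docs as its guide.
        output = call_model(documentation, task)
        # Refine: evaluate, and fold any feedback back into the docs so
        # the same mistake is prevented on the next cycle.
        ok, feedback = review(output)
        if ok:
            break
        documentation += f"\n\nCorrection (cycle {cycle}): {feedback}"
    return documentation, output
```

Note that the documentation, not the one-off prompt, is what accumulates the corrections; that is what makes the results stabilize after several cycles.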
The orchestrator layer: Once individual agents are trained, create an orchestrator agent that interprets your commands and dispatches work to the right subagent. Each subagent has its own context window and receives prompts from the orchestrator.
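The orchestrator layer can be sketched as a simple router. Everything here is an illustrative assumption (class names, keyword routing, the string-based "context window"); real systems would route with an LLM call rather than keyword matching, but the shape is the same: one dispatcher, several subagents, each holding its own context.

```python
# Hypothetical sketch of an orchestrator dispatching work to subagents.
# Names and the keyword-based routing are illustrative assumptions.

class Subagent:
    def __init__(self, name, persona):
        self.name = name
        # Each subagent keeps its own context window, seeded with its
        # persona (the "job description" from the Document phase).
        self.context = [persona]

    def run(self, prompt):
        self.context.append(prompt)
        return f"[{self.name}] handled: {prompt}"

class Orchestrator:
    def __init__(self):
        self.routes = {}  # keyword -> subagent

    def register(self, keyword, agent):
        self.routes[keyword] = agent

    def dispatch(self, command):
        # Interpret your command and hand it to the matching subagent.
        for keyword, agent in self.routes.items():
            if keyword in command.lower():
                return agent.run(command)
        raise ValueError(f"No subagent can handle: {command}")
```

For example, registering a `researcher` under the keyword `"research"` and a `writer` under `"write"` lets a single command like "Research competitor pricing" land in the right agent's context without you touching it.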
When to use this
- You keep restarting AI projects because the output is wrong on the first try
- You rely on your own review of a single agent’s output and the process feels slow
- You want to run multiple AI workflows simultaneously without bottlenecking on your attention
Common mistakes
- Expecting results on the first try → Plan for 5–6 iterations before the system stabilizes
- Skipping documentation and going straight to prompting → Write the agent’s “job description” before giving it any tasks
- Reviewing everything yourself instead of building agent-to-agent checks → Set up a skeptical agent that pushes back on the planning agent’s output
- Giving up after one or two bad outputs → Treat bad output as feedback that tightens your documentation
Quick-start
Start by picking the smallest repeatable task in your workflow and writing a one-page document that explains what the AI’s job is, what good output looks like, and what to avoid. Run it, evaluate the output, share what’s wrong with the LLM, update the document, and repeat until the results are consistent.
Ready for the full story?
Read *Stop prompting AI tools. Start managing AI teams*, featuring Matt Hastings of MVP Club, published on AI Lab by ActiveCampaign.
Ready to build instead?
Get the step-by-step checklist Matt used to build an AI agent team
Related
More from the AI Lab.