Steps to build a starter AI thinking system
From "The 3-level ladder that separates AI prompters from AI thinkers," published on AI Lab, engineered by ActiveCampaign

By the end of this checklist, you’ll have a working AI thinking system that turns scattered documents (books, internal docs, transcripts, campaigns) into a reusable knowledge architecture you can query, summarize, and build on. It’s based on Bryan Cassady’s 90-day adoption framework from The Generative Organization AI Playbook. Plan for a focused first weekend to set up the foundation, then continuous refinement over the following weeks.
Before you start
Make sure you have the following ready before you start building your workflow.
- Account access to NotebookLM, Gemini, and Perplexity
- An ActiveCampaign account (with CRM access)
- A folder of source material you can upload: proposals, frameworks, customer interview transcripts, past campaigns, decision rules
- One sentence describing what your system needs to answer six months from now
- An hour of uninterrupted time to draft the objective and pull source material
The workflow
Phase 1: Define the objective
After this phase, you’ll have: a single forcing question that determines what gets uploaded, which research matters, and which outputs are useful.
- Write the forcing question: Answer this in one sentence: “What is the single most important thing my system should be able to answer six months from now?”
- Pressure-test it: If the question doesn’t determine which documents are worth uploading, it’s too broad. Sharpen until it does.
- Write it down somewhere visible: Pin it in your notes, your project folder, or the description field of your first notebook. Every later decision points back to it.
Phase 2: Do external research against the objective
After this phase, you’ll have: a permanent NotebookLM library seeded with canonical sources, plus a project-specific notebook for your live challenge.
- Identify the three or four questions that matter most: These are the questions your system needs to be able to answer. Write them as bullet points under your forcing question.
Starter prompt (copy and edit):
Given this objective: [PASTE_FORCING_QUESTION], what are the 3–4 sub-questions a research library would need to answer to support it? Don’t pad the list.
- Build the standing library in NotebookLM: Upload canonical sources—books, foundational reports, frameworks—that will stay relevant across many projects. Treat this as a reference shelf, not a project workspace.
- Spin up a focused second notebook for the live challenge: Reuse the canonical PDFs but add project-specific files. Cassady runs his system this way: one permanent shelf, one fresh workspace per challenge.
- Use Gemini for synthesis against the standing library: Connect Gemini to your NotebookLM and ask it to synthesize across the canonical sources. This is where deeper patterns surface.
- Use Perplexity for anything recent: Perplexity surfaces citations that other tools bury. Use it when the question depends on current data the standing library can’t reach.
Phase 3: Upload internal knowledge
After this phase, you’ll have: a system grounded in your team’s actual work—not generic best practices.
- Pull the proposals that won: Not all of them. The ones that closed.
- Pull the frameworks your team actually uses: Decision rules, naming conventions, internal playbooks. If a framework lives only in someone’s head, write it down before uploading.
- Pull customer interview transcripts: Real voice beats summary. Raw transcripts produce sharper outputs than cleaned-up notes.
- Pull past campaign artifacts: Briefs, post-mortems, performance data. Specificity is the goal.
- Audit for AI-generated content: Anything that was drafted by AI and never humanized contaminates the library. Either rewrite it in your team’s voice or leave it out.
If you upload content that was AI-generated, you’ve infected your system with something that sounds nothing like the actual human you’re trying to build around.
Phase 4: Generate ideas in bulk
After this phase, you’ll have: hundreds of options against your defined objective—enough that the right answer becomes obvious rather than chosen by default.
- Run a bulk-generation pass against the objective: Cassady uses tools that produce hundreds of variations in about 15 minutes. The goal isn’t to use every idea—it’s to have enough options that the strongest one stands out.
Starter prompt (copy and edit):
Using the sources in this notebook, generate [50–100] distinct approaches to [PASTE_FORCING_QUESTION]. Vary the angle: audience, tone, mechanism, channel, scope. Don’t repeat ideas. Don’t filter for quality yet.
- Cluster the output: Group similar ideas. The clusters tell you where the obvious answers live and where the unexplored territory is.
- Pick the 3–5 strongest ideas to develop further: Apply SPARKS to each one—starting with S (speak it out) on what makes the idea worth developing.
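The clustering step above can be done by eye, but with 50–100 ideas a rough automatic first pass helps. Here's a minimal sketch in Python using greedy keyword-overlap (Jaccard) grouping; the idea strings are hypothetical stand-ins for your bulk-generation output, and a real pass might use embeddings instead of word overlap:

```python
def cluster_ideas(ideas, threshold=0.2):
    """Greedily group ideas whose word overlap (Jaccard similarity)
    with a cluster's first idea meets the threshold."""
    clusters = []  # each cluster is a list of idea strings
    for idea in ideas:
        words = set(idea.lower().split())
        placed = False
        for cluster in clusters:
            seed = set(cluster[0].lower().split())
            jaccard = len(words & seed) / len(words | seed)
            if jaccard >= threshold:
                cluster.append(idea)
                placed = True
                break
        if not placed:
            clusters.append([idea])  # start a new cluster
    return clusters

# Hypothetical bulk-generation output
ideas = [
    "Email drip campaign for lapsed customers",
    "Email reactivation campaign for lapsed subscribers",
    "Webinar series on onboarding best practices",
    "Onboarding webinar with live Q&A",
]
for group in cluster_ideas(ideas):
    print(len(group), "ideas, e.g.:", group[0])
```

Big clusters mark the obvious territory; singleton clusters are often where the unexplored angles live.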
Phase 5: Refine continuously
After this phase, you’ll have: a habit, not a finished product. The system gets sharper as the inputs get better.
- Run every output through SPARKS before it goes back into the system: Every artifact you save into the library should have been pressure-tested—S (speak), P (pivot), A (ask), R (reframe), K (keep asking), S (stop and think).
- Keep your bullshit hat handy: Before you read any AI output, deliberately put on the skeptic’s mindset. Without it, flattery looks like validation.
- Review the library quarterly: Remove sources that no longer reflect your work. Add new proposals, transcripts, and frameworks as they accumulate.
- Track what the system is good at: Note which questions it answers well and where it still falls short. The gaps tell you what to upload next.
Quick reference
- Total time: ~4–6 hours initial setup; ongoing refinement over 90 days
- Tools needed: NotebookLM (standing library), Gemini (synthesis), Perplexity (recent citations), plus whichever LLM you use for SPARKS prompting
- Key output: A reusable AI thinking system that answers your six-month forcing question and gets sharper every time you feed it back