Prompts to use with the SPARKS method
From "The 3-level ladder that separates AI prompters from AI thinkers," published on AI Lab, engineered by ActiveCampaign

How to use these prompts
These are ready-to-use prompts pulled from Bryan Cassady’s SPARKS method. Copy, paste, and swap in details where you see [BRACKETS]. Every AI tool and model behaves a little differently, so treat what comes back as a starting point and refine from there. The prompts are designed to be run in sequence on a single project—each one builds on the output of the previous.
Prompt 1: Speak it out
Best for: The first move on any new AI-assisted task—getting your messy thinking on the page before AI tries to help.
Use with: Any LLM (ChatGPT, Claude, Gemini)
Prompt 1
I’m working on [describe your project or challenge in 2–3 sentences]. Here’s everything I know, everything I’m unsure about, and what I’m actually trying to figure out: [dump it all without editing]. Help me clarify my thinking before we do anything else.
Variables to fill in:
- [describe your project or challenge in 2–3 sentences] — e.g., “I’m planning a launch campaign for a new product feature aimed at solo marketers.”
- [dump it all without editing] — Everything in your head: what you know, what you’re unsure about, what you’ve tried, what’s at stake. Resist the urge to clean it up.
What to expect: The AI should reflect your thinking back with sharper structure—naming the audience you implied, the constraints you mentioned in passing, the questions you didn’t realize you were asking. If it jumps straight to writing copy, stop and re-prompt.
Cadence note: Run this once at the start of every new project. Don’t reuse it across projects—the messy dump only works when it’s specific to the current challenge.
Prompt 2: Pivot roles
Best for: Forcing AI to interview you before it generates anything. Stops it from filling in blanks you should fill yourself.
Use with: Any LLM
Prompt 2
Before you write anything, I need you to interview me. Ask me [3–5] questions about my audience, my objective, my constraints, and what success looks like. Do not generate anything until I have answered all your questions.
Variables to fill in:
- [3–5] — Number of questions. Five is the upper bound for most projects; three is enough for shorter tasks.
What to expect: A numbered list of questions. Good ones probe audience specifics, success criteria, and constraints you haven’t articulated. Answer each one before the AI moves forward.
Where it goes wrong: If you find yourself adding “…but also go ahead and write a draft” at the end, delete it. That defeats the purpose. Hold the line on the interview.
Prompt 3: Ask for more
Best for: Critiquing an initial output. AI’s first response is almost always competent and complete-looking—and almost always mediocre.
Use with: Any LLM
Prompt 3
Here’s your output: [paste it]. Now tell me: What are the three weakest points? What’s the strongest assumption I’m making that could be wrong? What would a skeptic say is missing? What would make this genuinely useful rather than just competent?
Variables to fill in:
- [paste it] — The full text of the AI’s previous output, copied back into the prompt.
What to expect: A list of weaknesses, hidden assumptions, and skeptic objections. If the AI says “this is great, here are some minor tweaks,” push harder—it’s defaulting to flattery.
Follow-up prompt
Now rewrite the output addressing the three weakest points specifically. Don’t apologize or restate the critique—just produce the stronger version.
Prompt 4: Reframe
Best for: Breaking out of a stale answer. A true reframe changes the viewpoint, not just the phrasing.
Use with: Any LLM
Prompt 4
Try one of these reframes:
- Explain this to someone who thinks it won’t work. What would they say is the fatal flaw?
- Rewrite this from the perspective of our most skeptical customer.
- What would this look like if our budget were cut in half and we had to prioritize?
- What’s the version of this that a competitor would be embarrassed to see us produce?
How to use:
- Pick one reframe at a time. Running all four at once produces a watered-down composite. The whole point is to commit to a single new angle of attack.
What to expect: A version of your output that approaches the same problem from a different position. The skeptical-customer reframe surfaces objections you missed; the competitor-embarrassment reframe usually exposes generic, safe phrasing.
Prompt 5: Keep asking
Best for: Iteration with surgical feedback. “Make it better” produces nothing—the more specific the note, the better the revision.
Use with: Any LLM
Prompt 5
This is close, but not there yet. Specifically: [name the exact problem with the output]. The audience is [describe with specifics, including what they’ve already tried or believe]. The constraint I didn’t mention earlier is [name it]. Rewrite [the specific section] with all of that in mind.
Variables to fill in:
- [name the exact problem with the output] — e.g., “the second paragraph reads like a brochure,” not “it feels off.”
- [describe with specifics, including what they’ve already tried or believe] — e.g., “founders who already tried HubSpot and found it bloated.”
- [name it] — Any constraint you held back—budget, regulatory, internal politics.
- [the specific section] — Name the part you want rewritten. Don’t ask for a full rewrite if only one paragraph is broken.
What to expect: A targeted revision that addresses the named problem. If the output still feels generic, the feedback wasn’t specific enough—diagnose the next round before re-prompting.
Prompt 6: Stop and think
Best for: The final quality check before you accept any output. This is where good work becomes exceptional.
Use with: Any LLM
Prompt 6
Before you show me the final version, grade your own output against this standard: [describe what excellent looks like for this task]. Tell me where it falls short and fix those areas first.
Variables to fill in:
- [describe what excellent looks like for this task] — One sentence. e.g., “An excellent version of this brief is one that a junior strategist could execute against without asking a single clarifying question.”
What to expect: A self-graded critique followed by a revised output. The critique itself is often more useful than the revision—it tells you whether the AI actually understood the bar.
System-level use: Cassady’s longer-term move was to set a custom instruction at the account level: “Don’t compliment me. Challenge me. Before you give me any answer, give me a reason to look at this from a different direction.” That bakes the spirit of the final step into every session.
Putting it together
SPARKS works as a sequence on a single output:
- S (Speak it out): dump messy thinking
- P (Pivot roles): let AI interview you
- A (Ask for more): force critique on the first draft
- R (Reframe): try a different angle
- K (Keep asking): iterate with surgical notes
- S (Stop and think): grade against an excellence standard
You won’t always need all six. A short task might be S → P → S. A campaign brief might cycle through A, R, and K several times. The order stays fixed; how many steps you run, and how many times, depends on the project.
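If you drive an LLM through an API rather than a chat window, the same sequence can be scripted. Below is a minimal Python sketch of a shortened S → A → S pass, assuming a `call_llm(prompt)` function you supply yourself (here it is stubbed); the template strings are adapted from the prompts above, while the function names and chaining structure are illustrative, not part of the SPARKS method itself.

```python
# Sketch: chaining a shortened SPARKS pass (Speak it out -> Ask for more ->
# Stop and think) programmatically. Templates are adapted from the article;
# `call_llm` is a stand-in for whatever client you actually use.

SPARKS_PROMPTS = {
    "speak": (
        "I'm working on {project}. Here's everything I know, everything I'm "
        "unsure about, and what I'm actually trying to figure out: {dump}. "
        "Help me clarify my thinking before we do anything else."
    ),
    "ask_for_more": (
        "Here's your output: {output}. Now tell me: What are the three weakest "
        "points? What's the strongest assumption I'm making that could be "
        "wrong? What would a skeptic say is missing? What would make this "
        "genuinely useful rather than just competent?"
    ),
    "stop_and_think": (
        "Before you show me the final version, grade your own output against "
        "this standard: {standard}. Tell me where it falls short and fix "
        "those areas first."
    ),
}


def call_llm(prompt: str) -> str:
    """Stub: replace with a real API call (OpenAI, Claude, Gemini) in practice."""
    return f"[model response to: {prompt[:40]}...]"


def run_sparks(project: str, dump: str, standard: str) -> str:
    """Run S -> A -> S, feeding each step's output into the next prompt."""
    draft = call_llm(SPARKS_PROMPTS["speak"].format(project=project, dump=dump))
    critique = call_llm(SPARKS_PROMPTS["ask_for_more"].format(output=draft))
    revised = call_llm(
        "Now rewrite the output addressing the three weakest points "
        f"specifically. Critique: {critique}"
    )
    final = call_llm(
        SPARKS_PROMPTS["stop_and_think"].format(standard=standard)
        + f"\nDraft: {revised}"
    )
    return final
```

The point of the sketch is the data flow: each prompt receives the previous step's output verbatim, which is exactly what you do manually when you paste an answer back into the chat.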
Tips for better results
- Run the Hallway Test before you start. Imagine asking the question to the smartest intern in your company. If they’d push back, the prompt isn’t ready.
- Wear the “bullshit hat” before reading any output. AI defaults to flattery. The skeptic’s mindset is yours, not the AI’s—deliberately put it on before evaluating.
- Hold the line on the interview step. The single biggest failure mode is asking AI to interview you and then telling it to generate anyway. Answer the questions first.
- Be specific with reframes. “Reframe this” without naming the new angle gives you a paraphrase, not a real reframe.