Most marketers using AI are producing more—and presumably better—work. That’s actually the problem.


If everyone is producing better work, “better” is the new average. The edge now belongs to people who use AI to design systems, not just prompts.

Bryan Cassady has spent more than two decades helping organizations move ideas from strategy to execution. He’s an innovation professor, keynote speaker, and author of The Generative Organization AI Playbook, a 26-chapter field guide co-built with 34 experts, tested against more than 5,400 generated ideas, and available as a free download. His observation about where most marketers stall is direct: the ones building real advantage are using AI to go deeper, not just faster.

His method for getting there is called SPARKS.

SPARKS: the 6‑step method, with prompts you can copy

Before you open any AI tool, Cassady recommends a quick diagnostic: The Hallway Test.

Imagine walking down the hallway to the smartest new intern in your company. Ask them the question you were about to type.

“Write a launch email” fails immediately. A smart intern would push back: Who is it for? What’s at stake? What are they choosing between? Why now? AI behaves exactly the same way. Vague input produces competent but generic output. The Hallway Test forces the clarity that makes everything downstream work.

SPARKS is how you pass the Hallway Test.

S—Speak it out

Get your thinking on paper before you expect AI to help with it. AI can’t clarify something you haven’t articulated yet. The act of saying it—even messily—is the work.

Try this prompt:

I’m working on [describe your project or challenge in 2–3 sentences]. Here’s everything I know, everything I’m unsure about, and what I’m actually trying to figure out: [dump it all without editing]. Help me clarify my thinking before we do anything else.

Where people go wrong: Presenting a polished version of your thinking. The whole point of this step is the mess. Rough input produces better clarification.

P—Pivot roles

Before AI generates anything, make it interview you. Most generic outputs come from AI filling in blanks you should have filled yourself. The fix is to stop the generation and flip the dynamic.

Try this prompt:

Before you write anything, I need you to interview me. Ask me [3–5] questions about my audience, my objective, my constraints, and what success looks like. Do not generate anything until I have answered all your questions.

Where people go wrong: Adding “…but also go ahead and write a draft” at the end. That defeats the purpose. Hold the line on the interview.

A—Ask for more

First responses are starting points. AI will produce something competent and complete-looking. Most of it is mediocre. You need to force the critique.

Try this prompt:

Here’s your output: [paste it]. Now tell me: What are the three weakest points? What’s the strongest assumption I’m making that could be wrong? What would a skeptic say is missing? What would make this genuinely useful rather than just competent?

Where people go wrong: Asking ‘Is this good?’ AI will almost always say yes. Ask what’s weak, not whether it’s strong.

R—Reframe

Ask the same question the same way and you get the same answer—true of humans, true of AI. Reframing means changing the viewpoint, not just the phrasing.

Try any of these:

Explain this to someone who thinks it won’t work. What would they say is the fatal flaw? 

Rewrite this from the perspective of our most skeptical customer. 

What would this look like if our budget were cut in half and we had to prioritize? 

What’s the version of this that a competitor would be embarrassed to see us produce?

Where people go wrong: Using ‘reframe’ as a synonym for ‘try again.’ A true reframe changes the angle of attack entirely.

K—Keep asking

Iteration isn’t prompting the same thing again. It’s reading what came back, identifying exactly what’s missing, and naming it.

Try this prompt:

This is close, but not there yet. Specifically: [name the exact problem with the output]. 

The audience is [describe with specifics, including what they’ve already tried or believe]. 

The constraint I didn’t mention earlier is [name it]. 

Rewrite [the specific section] with all of that in mind.

Where people go wrong: Vague feedback. ‘Make it better’ produces nothing. The more surgical the note, the better the revision.

S—Stop and think

“Beauty lives in the pause, not in the progress,” Cassady says. Before you accept any output, write one sentence describing what “excellent” looks like for this specific task. Then evaluate the output against that sentence.

Try this prompt:

Before you show me the final version, grade your own output against this standard: [describe what excellent looks like for this task]. Tell me where it falls short and fix those areas first.
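
If your team runs this step through an API rather than a chat window, the same self-grading habit can be scripted. Here is a minimal sketch in Python, assuming the OpenAI SDK; the grade_then_fix helper, model choice, and prompt wording are illustrative, not Cassady’s own tooling.

# A minimal sketch of the final-S pattern: grade against an explicit
# standard first, then revise only what falls short. Assumes the OpenAI
# Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def grade_then_fix(draft: str, standard: str, model: str = "gpt-4o") -> str:
    """Grade a draft against a one-sentence definition of excellent,
    then rewrite only the areas that fall short of it."""
    prompt = (
        f"Standard of excellence for this task: {standard}\n\n"
        f"DRAFT:\n{draft}\n\n"
        "First, list where the draft falls short of the standard. "
        "Then rewrite the draft, fixing only those areas."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content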

How SPARKS scales beyond a single project

SPARKS is the method for a single output: a blog post, a landing page, a campaign brief. It helps you think with AI at the level of one task.

A system lives one level up: it’s the way your sources, tools, and prompts connect, so every important task benefits from the same discipline, not just the one you happen to be working on today.

SPARKS works beautifully for a single blog post; your entire GTM strategy needs a repeatable knowledge architecture, a structured system of sources, tools, and workflows that turns scattered documents (books, internal docs, transcripts, campaigns) into a reusable “innovation library” you can query, summarize, and build on.

Cassady uses 90 days as his standard adoption horizon. It’s long enough to build the habit, short enough to hold focus. His 90-Day GenOrg GPT tool walks teams through the same five-step build in a structured sequence.

A starter system in five steps

  • Define your objective first. A useful forcing question: “What is the single most important thing my system should be able to answer six months from now?” That one question determines what research matters, what documents are worth uploading, and what outputs are useful. Without it, you’re building a faster search engine, not a smarter thinking tool.
  • Do external research against that objective. Cassady’s tool stack is specific: NotebookLM as the standing library, Gemini connected to that notebook for deeper synthesis, and Perplexity for anything recent (because it surfaces citations that other tools bury). He keeps a permanent notebook seeded with canonical sources, then spins up a focused second notebook for each live challenge, reusing those PDFs but adding project-specific files. The point is to start with the three or four questions that matter most.
  • Upload internal knowledge. This is where most people underinvest. Cassady is specific about what goes in: proposals that won, frameworks your team actually uses, customer interview transcripts, past campaigns, and decision rules. The more specific to your actual work, the better the output. One warning: be selective about what you include. 

“If you upload content that was AI-generated, you’ve infected your system with something that sounds nothing like the actual human you’re trying to build around,” Cassady says.

  • Generate ideas in bulk. Cassady uses tools that produce hundreds of variations against a defined objective in about 15 minutes. The goal isn’t to use every idea. It’s to have enough options so the right answer becomes obvious rather than chosen by default. (A sketch of what this loop can look like follows this list.)
  • Refine continuously—and use SPARKS at every step. Every output that goes back into the system gets the same six-step SPARKS treatment. The system gets sharper as the inputs get better. This is not a one-time build. It’s a habit.
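
For teams that script the bulk-generation step, the loop can be as simple as the sketch below. It is written in Python and assumes the OpenAI SDK; Cassady doesn’t name his tools beyond the output rate, so the generate_variations helper, model, and batch sizes are illustrative assumptions.

# A minimal sketch of step four: many candidate ideas against one
# defined objective. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

def generate_variations(objective: str, batches: int = 10, per_batch: int = 10) -> list[str]:
    """Collect batches x per_batch candidate ideas for one objective."""
    ideas: list[str] = []
    for _ in range(batches):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any capable model works
            temperature=1.1,      # higher temperature widens the spread of ideas
            messages=[{
                "role": "user",
                "content": (
                    f"Objective: {objective}\n"
                    f"List {per_batch} distinct ideas, one per line, "
                    "with no numbering or commentary."
                ),
            }],
        )
        text = response.choices[0].message.content
        ideas.extend(line.strip() for line in text.splitlines() if line.strip())
    return ideas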

Two traps SPARKS is designed to interrupt

The first is confirmation bias. 

“Be ready for something to disagree with you,” Cassady says. “If you’re only looking for things that confirm what you already think, you’re not actually bringing in any external information.”

The second is what he calls the “bull$hit hat” problem. AI will tell you the idea is brilliant, the draft is strong, the strategy is sound. Cassady’s solution: Put on your bullshit hat before you read any AI output. The hat is a metaphor. It’s the skeptic’s mindset you deliberately wear, the same skepticism a tough editor or a demanding client would bring. You’re wearing it, not the AI. Without it, flattery looks like validation.

He changed his own AI instructions to open every session with:

Don’t compliment me. Challenge me. Before you give me any answer, give me a reason to look at this from a different direction.

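In a chat product, this instruction lives in the custom-instructions or system-prompt setting, with no code required. If you work through an API instead, the equivalent is a standing system message. Here is a minimal sketch in Python, assuming the OpenAI SDK; the ask helper and model choice are illustrative.

# A minimal sketch: pin the "challenge me" instruction as a standing
# system message so every exchange opens with it. Assumes the OpenAI
# Python SDK.
from openai import OpenAI

client = OpenAI()

CHALLENGE_ME = (
    "Don't compliment me. Challenge me. Before you give me any answer, "
    "give me a reason to look at this from a different direction."
)

def ask(question: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": CHALLENGE_ME},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content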

AI feedback, he’s noticed, tends to land differently than the human version. 

“If a person tells me I’m wrong, I feel it,” he says. “If AI tells me, I think, okay, maybe it’s got a point.” Research on this is genuinely mixed. Some studies suggest machine criticism feels less socially threatening, while others find it feels less fair. What Cassady is describing is probably both things at once.

Where you are: A three-level maturity ladder

SPARKS is a Level 1 system. There are two more levels above it. Most marketers are on the first rung. Knowing which rung you’re on—and what signals tell you to move—is the whole game.

Level 1: Prompt engineering

What it looks like: You’re writing requests and iterating. Your outputs are decent but inconsistent. You spend more time revising AI output than you expected.

Signal you’re ready to move up: You’re spending more time fixing AI output than you would have spent doing the work yourself.

How SPARKS fits: SPARKS is the systematic version of Level 1. It transforms random prompting into a repeatable method. Master SPARKS before you move up.

Level 2: Context engineering

What it looks like: Before you type a single prompt, you give AI the full picture: objective, audience, constraints, tone, and what success looks like. You’ve stopped re-explaining yourself mid-conversation.

Signal you’re ready to move up: Your first outputs are reliably good, but they feel like they’re meeting the bar rather than exceeding it.

How SPARKS fits: The ‘S’ (speak it out) becomes your context-loading step. You front-load everything SPARKS taught you to develop through conversation. You’ve internalized the method.

Level 3: Results engineering

What it looks like: Before you prompt anything, you define what ‘excellent’ looks like and give that definition to AI as an evaluation standard. AI grades its own output before you see it.

Signal you’ve arrived: You’re spending more time on strategy than on revision. AI is handling the quality control.

How SPARKS fits: This is the final ‘S’ (stop and think) operating at the system level, not just on individual outputs. You’ve built the discipline into the process itself.
