Prompts to build an AI analytics dashboard
This resource is based on "He built a $10 dashboard that does what used to cost $1,000 a month," featuring Emanuel Cinca of Stacked Marketer, published on the AI Lab by ActiveCampaign.

How to use these prompts
These are ready-to-use prompts based on Emanuel Cinca’s conversational approach to building a custom analytics dashboard with Google Gemini. Copy, paste, and swap in details where you see [BRACKETS]. Every AI tool and model behaves a little differently, so treat what comes back as a starting point—review the output and refine from there.
Putting it together
These prompts follow the sequence Cinca used: from an open-ended feasibility conversation through code generation, deployment, and debugging. The output of each step feeds the next—the feasibility outline becomes your roadmap for the code prompt, the code becomes the artifact you share in the deployment and debugging prompts. Work through them in order on a first build.
Prompt 1: Scope the project with AI
- Best for: Starting the conversation—confirming whether your reporting problem can be solved with code before writing a line of it
- Use with: Google Gemini (Cinca’s tool of choice) / Any LLM
These examples are from Cinca’s actual usage—replace with your own reporting context.
Scope the project
I have this issue where we do [DESCRIBE YOUR REPORTING TASK] manually. I think it should be possible to do with data. Is it possible? How can you help me with this?
Specifically, I want to:
- Pull this data automatically via the [PLATFORM NAME] API
- Process it to show [SPECIFIC METRICS YOU NEED—e.g., cost per lead, churn rate, campaign performance]
- Display it in a simple web dashboard my team can access
I have no coding background beyond basics. Don’t write code yet—outline the high-level approach first.
Variables to fill in:
- [DESCRIBE YOUR REPORTING TASK]—e.g., “pull subscriber data from our email platform, calculate churn rates by campaign source, and create weekly performance reports”
- [PLATFORM NAME]—e.g., “ActiveCampaign,” “Mailchimp,” “HubSpot”
- [SPECIFIC METRICS YOU NEED]—e.g., “true cost per lead including churn, campaign-level ROI, subscriber cohort analysis”
What to expect: A confirmation that it’s feasible, plus a 4–6 step outline of the approach. The AI should mention API calls, data processing (likely Python), and hosting options.
Cadence note: One-time setup conversation. Once you have the outline, move to Prompt 2.
Prompt 2: Generate the data-pulling code
- Best for: Getting the AI to write the backend Python script that connects to your platform’s API and calculates the metrics you need
- Use with: Google Gemini / Any LLM
Generate data-pulling code
I want to build a Python script that pulls subscriber data from the [PLATFORM NAME] API. Here’s what I need:
Data to pull:
- [DATA FIELD 1—e.g., “subscriber email”]
- [DATA FIELD 2—e.g., “subscription date—specifically the ‘date_added’ field, which is when they first joined the list”]
- [DATA FIELD 3—e.g., “unsubscribe date—the ‘unsubscribe_date’ field, only populated if they’ve left the list”]
- [DATA FIELD 4—e.g., “UTM source from their signup”]
Important: when I reference “the date,” I mean [SPECIFIC DATE FIELD AND ITS EXACT FIELD NAME IN THE API]—not any other date field.
The script should:
- Authenticate with the API using an API key
- Pull all subscribers between [START DATE] and [END DATE]
- Calculate [METRIC—e.g., “churn rate per UTM source: what percentage of subscribers from each source unsubscribed within 14 days of joining”]
- Output the results as a structured dataset I can display in a table
Write the complete Python script with comments explaining each section.
Variables to fill in:
- [PLATFORM NAME]: your email or marketing platform
- [DATA FIELDS]: be as specific as possible; name the exact API field for each data point
- [SPECIFIC DATE FIELD]: e.g., “the ‘date_added’ field, which records when they first joined the list”—Cinca learned that “the date” can mean three different things in subscriber data
- [METRIC]: the calculation you want automated, written out in plain language
What to expect: A complete Python script with API authentication, data fetching, processing logic, and output formatting. Cross-reference the field names against your platform’s API documentation before running it.
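For a sense of what the processing logic in that script amounts to, here is a minimal sketch of the churn calculation the prompt describes. This is a hypothetical illustration, not Cinca's generated code: the field names (`utm_source`, `date_added`, `unsubscribe_date`) mirror the bracketed examples above, and the API fetch itself is omitted because it is platform-specific.

```python
from datetime import datetime, timedelta

def churn_by_source(subscribers, window_days=14):
    """Percent of subscribers per UTM source who unsubscribed within
    `window_days` of joining. Each record is a dict with 'utm_source',
    'date_added', and (only if they left) 'unsubscribe_date' as ISO dates."""
    totals, churned = {}, {}
    for sub in subscribers:
        source = sub.get("utm_source", "unknown")
        totals[source] = totals.get(source, 0) + 1
        if sub.get("unsubscribe_date"):
            joined = datetime.fromisoformat(sub["date_added"])
            left = datetime.fromisoformat(sub["unsubscribe_date"])
            if left - joined <= timedelta(days=window_days):
                churned[source] = churned.get(source, 0) + 1
    return {s: round(100 * churned.get(s, 0) / n, 1) for s, n in totals.items()}

# Tiny in-memory sample standing in for an API response
sample = [
    {"utm_source": "newsletter", "date_added": "2024-01-01", "unsubscribe_date": "2024-01-05"},
    {"utm_source": "newsletter", "date_added": "2024-01-01"},
    {"utm_source": "ads", "date_added": "2024-01-01", "unsubscribe_date": "2024-03-01"},
]
print(churn_by_source(sample))  # the 'ads' unsubscribe falls outside the 14-day window
```

Writing the metric out this explicitly in your prompt, as in the template above, is what lets the AI produce an unambiguous version of this logic.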
Follow-up prompt:
The script runs but [DESCRIBE THE PROBLEM: e.g., “the churn rate seems too high” or “it’s returning an error on line 47”]. Can you look through the code again—what could be wrong? Here’s the error output: [PASTE ERROR LOG]
Prompt 3: Set up hosting and deployment
- Best for: Getting step-by-step instructions to deploy your code so your team can access the dashboard via a browser
- Use with: Google Gemini / Any LLM
Hosting and deployment setup
I’ve written a Python script that pulls and processes data from [PLATFORM NAME]’s API. Now I need to:
- Host this somewhere my team can access via a web browser
- Set up a simple web dashboard that displays the data in interactive tables
- Add authentication so only my team can access it—we use [Google Workspace / Microsoft 365 / other SSO]
- Set it up on a subdomain: [analytics.yourdomain.com]
I have [no / basic / some] experience with hosting and deployment. My budget for hosting is under $[AMOUNT] per month.
Give me step-by-step instructions. Assume I need you to explain every click and command.
Variables to fill in:
- [PLATFORM NAME]: your data source
- [Google Workspace / Microsoft 365 / other SSO]: your team’s existing login system; Cinca used Google Workspace, which eliminated the need for a separate login database
- [analytics.yourdomain.com]: your preferred subdomain
- [AMOUNT]: Cinca’s full dashboard runs on less than $10/month in Google Cloud hosting
What to expect: Detailed deployment instructions covering hosting platform setup, environment configuration, code deployment, and authentication. The AI will likely suggest a few options (Google Cloud, AWS, Railway, Render)—choose based on what your team already uses, not what the AI recommends first.
Cadence note: One-time setup. After deployment, the dashboard runs automatically—no recurring prompting required.
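The AI will usually propose a small web framework such as Flask for the display layer. As a rough, hypothetical idea of what that layer amounts to, here is a stdlib-only sketch serving placeholder churn numbers where your script's real output would go; authentication and the subdomain are handled at the hosting layer, as the prompt above requests.

```python
import html
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_table(rows):
    """Render (utm_source, churn_pct) pairs as a simple HTML table."""
    cells = "".join(
        f"<tr><td>{html.escape(s)}</td><td>{pct}%</td></tr>" for s, pct in rows
    )
    return f"<table><tr><th>UTM source</th><th>14-day churn</th></tr>{cells}</table>"

class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Placeholder numbers; in practice, feed in your script's output
        page = render_table([("newsletter", 3.2), ("ads", 11.5)])
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(page.encode())

def run(port=8080):
    """Start the dashboard; visit http://localhost:8080 in a browser."""
    HTTPServer(("", port), DashboardHandler).serve_forever()
```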
Prompt 4: Debug unexpected results
- Best for: When the dashboard produces results that seem wrong—Cinca’s systematic approach to AI-assisted debugging
- Use with: Google Gemini / Any LLM
Debug
I got this result and it seems wrong: [DESCRIBE THE UNEXPECTED OUTPUT—e.g., “the churn rate for Campaign A is showing 78%, which seems far too high based on what I know about that campaign”].
Here’s my current code: [PASTE CODE OR ATTACH FILE]
Can you look through the code again and tell me what could be wrong?
After the AI responds, it will often ask for more information. Share what it requests:
Request for more info
Here’s the error log / data sample it’s generating: [PASTE OUTPUT]
Variables to fill in:
- [DESCRIBE THE UNEXPECTED OUTPUT]—describe what you’re seeing and why it seems wrong; context about the expected range helps the AI narrow down the cause
- [PASTE CODE OR ATTACH FILE]—share your full current script
What to expect: The AI will walk through the code to identify likely causes, then ask for specific diagnostic information—an error log, a data sample, or the result of a test query. Share exactly what it asks for. Cinca describes the dynamic as having someone with “a very strong brain, but no hands and eyes”—you’re the hands, the AI is the brain.
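Since you are "the hands and eyes," it helps to have a quick way to produce the data sample the AI asks for. This is a hypothetical helper (the function name and record fields are illustrative) that summarizes field names, row count, and a few rows into a pasteable blob:

```python
import json

def diagnostic_snapshot(records, n=3):
    """Summarize the field names, row count, and a few sample rows into
    a JSON blob you can paste into the chat when the AI asks for data."""
    fields = sorted({key for record in records for key in record})
    return json.dumps(
        {"fields": fields, "total_rows": len(records), "sample": records[:n]},
        indent=2,
        default=str,  # dates and other non-JSON types become strings
    )

rows = [
    {"email": "a@example.com", "date_added": "2024-01-01"},
    {"email": "b@example.com", "date_added": "2024-01-02", "unsubscribe_date": "2024-01-04"},
]
print(diagnostic_snapshot(rows))
```

Sharing the field names alongside the sample helps the AI catch the kind of ambiguity (which "date"?) that causes silent errors.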
Tips for better results
- Name exact API fields—Cinca learned that “the date” can mean three different things in subscriber data (subscription date, list-join date, unsubscribe date). Specificity prevents silent errors.
- Repeat yourself rather than assume context—more detail is always better than less.
- Test outputs against a known data sample before trusting the dashboard for decisions.
- Start each new AI session by pasting your existing code and saying “review this code and understand what it does before I ask for changes”—context doesn’t persist between sessions.
- Treat the first version as a learning project, not a mission-critical system. Cinca’s advice: “It’s never going to be perfect from the first go.”
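The “known data sample” tip can be made mechanical: hand-count one small slice of data, then assert that the computed number agrees before trusting dashboard-wide figures. A hypothetical sketch of that check:

```python
from datetime import date

# Known slice: pulled by hand from the platform UI, churn counted manually.
known_rows = [
    {"date_added": date(2024, 1, 1), "unsubscribe_date": date(2024, 1, 3)},
    {"date_added": date(2024, 1, 1), "unsubscribe_date": None},
    {"date_added": date(2024, 1, 1), "unsubscribe_date": date(2024, 2, 20)},
]

def churned_within(row, days=14):
    """True if the subscriber unsubscribed within `days` of joining."""
    return row["unsubscribe_date"] is not None and \
        (row["unsubscribe_date"] - row["date_added"]).days <= days

computed = 100 * sum(churned_within(r) for r in known_rows) / len(known_rows)
# By hand: 1 of 3 subscribers left within 14 days -> 33.3%
assert round(computed, 1) == 33.3, f"Hand count says 33.3%, script says {computed:.1f}%"
print(f"Sanity check passed: {computed:.1f}%")
```

If the assertion fails, you have exactly the kind of concrete discrepancy that Prompt 4 is built to debug.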
Ready for the full story?
Read "He built a $10 dashboard that does what used to cost $1,000 a month," featuring Emanuel Cinca of Stacked Marketer, published on the AI Lab by ActiveCampaign.