AI Analytics Chatbot - User Guide

Flash Analytics includes an AI-powered analytics copilot that answers questions about your data, generates charts, investigates metric changes, and explains what it finds, all from a conversational interface.

Table of Contents

  1. What the Chatbot Can Do
  2. Starting a Conversation
  3. Asking Analytics Questions
  4. Understanding Responses
  5. Investigation Mode
  6. Chart Generation
  7. Follow-up Prompts
  8. How Entity Resolution Works
  9. Chat History and Memory
  10. Caveats and Gotchas

1. What the Chatbot Can Do

The chatbot is a tool-using analytics copilot. It is not a general-purpose assistant. It is specifically wired into your project's event data and analytics engine.

| Capability | Example prompts |
| --- | --- |
| Answer direct analytics questions | How many users signed up last week? |
| Generate charts and reports | Show me signups over the last 30 days broken down by country |
| Build funnels | Create a funnel from add_to_cart to purchase |
| Investigate metric changes | Why is revenue down this week? |
| Analyze funnel drop-off | Where are users dropping out of the checkout flow? |
| Compare segments | Compare conversion rate for users on iOS vs Android |
| Analyze behavior paths | What do users do after they view the pricing page? |
| Debug tracking issues | Is our tracking working properly? |
| Profile lookup | Show me the journey for user john@example.com |

It cannot modify experiment configuration, change settings, send data, or make decisions on your behalf. It is read-only over your analytics data.

2. Starting a Conversation

Navigate to Chat in the sidebar.

Type a question in the input field and press Enter. A new chat is created automatically and appears in the chat history list on the left.

Each chat is scoped to a project and persists across sessions. You can return to any past conversation and continue from where you left off.

Tip: The more specific you are, the better the results. "Show me conversions" is vague; "Show me purchase conversion rate by country for the last 30 days" gives the model enough detail to build an accurate query.

3. Asking Analytics Questions

Intent Modes

| Mode | Triggered by | What happens |
| --- | --- | --- |
| Simple query | How many X happened? | Runs a report and returns a chart or metric card. |
| Comparison | Compare X and Y, or iOS vs Android | Runs a segmented report with a breakdown by the comparison dimension. |
| Investigation | Why is X down? or What changed? | Runs a structured investigation playbook across multiple data angles. |
| Debugging | Is tracking working? or Are events firing? | Checks event recency, volume gaps, and missing properties. |
| Behavior | What do users do after X? | Runs path analysis starting from the specified event. |
| Retention | What drives retention? or Who comes back? | Analyzes retention cohorts and behavioral drivers. |

You do not need to specify a mode. The model infers intent from your prompt.
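The real system infers intent with a language model, but the idea of routing a prompt to one of the modes above can be sketched with a simple keyword heuristic. Everything here (the phrase lists, the mode names as strings) is illustrative, not the product's actual logic:

```python
# Illustrative only: the chatbot uses a model to infer intent; this keyword
# heuristic just demonstrates mapping a prompt to one of the table's modes.
INTENT_KEYWORDS = {
    "investigation": ["why is", "what changed"],
    "debugging": ["tracking working", "events firing"],
    "behavior": ["do users do after"],
    "retention": ["retention", "come back"],
    "comparison": [" vs ", "compare"],
}

def infer_intent(prompt: str) -> str:
    """Return the first mode whose trigger phrases appear, else a simple query."""
    text = prompt.lower()
    for mode, phrases in INTENT_KEYWORDS.items():
        if any(p in text for p in phrases):
            return mode
    return "simple"
```

A model-based classifier handles paraphrases and ambiguity far better than substring matching, which is why you never have to phrase your question in a fixed way.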

Entity Resolution

Before querying data, the chatbot verifies that the event names and property names you mention actually exist in your project. It maps natural-language terms to real tracked events.

For example, "signups" can be resolved to the actual event in your schema, such as user_registered or signup_complete, based on your project's event catalog.

If the chatbot cannot confidently resolve a term, it asks a clarifying question before proceeding.

Caveat: New events that were only recently ingested may not yet appear in the catalog. The catalog is refreshed every 4 hours.

4. Understanding Responses

Text responses

Used for direct metric answers, status summaries, and explanatory context. Responses are rendered in markdown with headings, bullet points, and emphasis.

Chart responses

  • The chart type is chosen automatically based on the query.
  • Charts support the same interactions as the Reports section.
  • An Explain button appears below each generated chart.

Investigation cards

When the chatbot runs an investigation, it renders a structured card.

| Section | Description |
| --- | --- |
| Summary | One-paragraph plain-language description of what was found. |
| Confidence | High, Medium, or Low confidence in the finding. |
| Findings | Ranked list of observations with supporting evidence. |
| Comparison periods | The current vs baseline windows used in the analysis. |
| Recommended charts | Charts generated as evidence for the findings. |
| Follow-up prompts | Suggested next questions shown as clickable chips. |

5. Investigation Mode

Investigation mode runs a structured multi-step analysis rather than a single query. It is triggered for questions about metric changes, drop-off, and behavioral shifts.

Available investigation playbooks

| Playbook | Trigger phrases |
| --- | --- |
| Metric change | Why is X down or up? What changed with metric X? |
| Funnel drop-off | Where are users dropping? Why is checkout conversion low? |
| Slice comparison | Compare X for segment A vs B |
| Tracking debug | Is tracking broken? Are events firing? |
| Behavior path analysis | What do users do after X? |
| Retention drivers | What drives users to come back? |

Drop-off investigation

  1. Builds a funnel from the steps you described, or infers them.
  2. Queries both the current period and a baseline period.
  3. Compares step-level drop-off rates between the two periods.
  4. Identifies the step where drop-off worsened the most.
  5. Returns a rendered funnel chart as evidence alongside the written finding.

Caveat: Investigation results are descriptive. They surface patterns and evidence, but they do not run statistical significance tests.
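
Steps 2–4 of the playbook boil down to comparing step-level drop-off between two periods. A minimal sketch, assuming step counts are simple lists of users reaching each funnel step (the data shapes are hypothetical):

```python
# Sketch of the drop-off comparison: given per-step user counts for the
# current and baseline periods, find the step transition that worsened most.
def dropoff_rates(step_counts):
    """Fraction of users lost between each consecutive funnel step."""
    return [1 - b / a for a, b in zip(step_counts, step_counts[1:])]

def worst_step(current, baseline):
    """Index of the transition whose drop-off grew the most vs baseline."""
    deltas = [c - b for c, b in zip(dropoff_rates(current), dropoff_rates(baseline))]
    return max(range(len(deltas)), key=deltas.__getitem__)
```

For example, a funnel that went 1000 → 600 → 150 this period against a 1000 → 620 → 310 baseline would flag the second transition, since its drop-off rose from 50% to 75%.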

6. Chart Generation

When the chatbot generates a chart, it creates a report configuration behind the scenes and executes it against your ClickHouse event data.

Supported chart types

  • Line charts
  • Bar charts
  • Funnel charts
  • Conversion charts
  • Retention charts
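
To make "creates a report configuration behind the scenes" concrete, here is a hedged sketch of what such a configuration might look like. The field names (`chart_type`, `breakdown`, `last_n_days`, and so on) are assumptions for illustration, not the product's actual API:

```python
# Hypothetical shape of a generated report configuration; field names are
# illustrative assumptions, not Flash Analytics internals.
report_config = {
    "chart_type": "line",
    "event": "signup_complete",
    "breakdown": "country",
    "date_range": {"last_n_days": 30},
}

def describe(config: dict) -> str:
    """Human-readable one-liner for a generated chart configuration."""
    return (f"{config['chart_type']} chart of {config['event']} "
            f"by {config['breakdown']} over the last "
            f"{config['date_range']['last_n_days']} days")
```

Recreating a chart manually in Reports amounts to re-entering exactly these ingredients: the event, the breakdown dimension, the date range, and the chart type.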

Saving a generated chart

Chatbot-generated charts are shown inline in the conversation. To save a chart as a permanent report, click Open in Reports if available, or manually recreate it in Reports using the same configuration.

Caveat: Charts generated in chat are not automatically saved as persistent reports. They exist only within the conversation context.

7. Follow-up Prompts

Each investigation card includes suggested follow-up prompts as clickable chips at the bottom. Clicking one appends that prompt to the chat and triggers a new analysis.

You can also type follow-up questions manually. The chatbot remembers the context of the current conversation, so you do not need to repeat the event names or time window from the previous message.

Example conversation:

User: "Why is purchase conversion down this week?"
Chatbot: Investigation identifies step 2 drop-off as the driver.
User: "Show me step 2 by country"
Chatbot: Runs a country breakdown for the specific step.

Caveat: Context memory is scoped to the current conversation. Starting a new chat resets context. Very long conversations may compress early context.

8. How Entity Resolution Works

Before querying data, the chatbot verifies that event names and property names you mention exist in your project schema.

What gets resolved

  • Event names such as signups, checkouts, and purchases.
  • Properties such as plan, country, and source.
  • Metrics such as conversion rate and revenue.

How it works

  1. Your prompt is analyzed for analytics entities.
  2. The project catalog of known events and properties is searched.
  3. High-confidence matches are used automatically.
  4. Low-confidence matches trigger a clarification question.
  5. Unresolvable terms block the query until you clarify.
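
Steps 2–4 above can be sketched with standard-library fuzzy matching. The resolver itself is internal, so the similarity measure and the 0.8/0.5 cutoffs here are assumptions chosen purely to illustrate the three outcomes:

```python
# Sketch of catalog matching with stdlib fuzzy string similarity; the real
# resolver and its confidence thresholds are internal to the product.
import difflib

def resolve(term: str, catalog: list[str]):
    """Return (best_match, action): use it, ask to clarify, or block."""
    scored = [(difflib.SequenceMatcher(None, term, event).ratio(), event)
              for event in catalog]
    score, best = max(scored)
    if score >= 0.8:          # high confidence: use automatically
        return best, "use"
    if score >= 0.5:          # low confidence: ask a clarifying question
        return best, "clarify"
    return None, "block"      # unresolvable: block until the user clarifies
```

With a catalog of `["user_registered", "signup_complete", "purchase"]`, a near-exact term like "signup_completed" resolves automatically, while "signups" is close enough to prompt a clarification against `signup_complete`.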

What happens when resolution fails

The chatbot will ask a clarification question rather than silently running a query on entities that do not exist.

Example: I couldn't find an event called signups in your project. Did you mean user_registered?

Caveat: Entity resolution depends on the ingested event catalog. The catalog is updated every 4 hours and has a maximum of 100,000 entries per project.

9. Chat History and Memory

Conversation history

All messages in a chat, including charts and investigation cards, are stored and reloaded when you return to a conversation.

Rolling summary

The chatbot maintains a rolling summary of each conversation. This summary is injected on each new message so the model can remember what was discussed earlier without replaying the full history.
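
The mechanics of a rolling summary can be sketched as follows. The real summarizer is model-based; this toy version just folds the oldest turn into a summary string once the recent-message window overflows, and every name here is illustrative:

```python
# Toy illustration of rolling-summary memory: recent turns are kept verbatim,
# older turns are folded into a summary injected ahead of each new message.
def roll_summary(summary: str, turn: str, keep: list[str],
                 max_recent: int = 4) -> tuple[str, list[str]]:
    """Append a turn; once the window overflows, compress the oldest turn
    into the summary instead of replaying it in full."""
    keep = keep + [turn]
    while len(keep) > max_recent:
        oldest, keep = keep[0], keep[1:]
        summary = (summary + " | " if summary else "") + oldest
    return summary, keep

def build_prompt(summary: str, recent_turns: list[str], new_message: str) -> str:
    """Compose the model input from summary + recent context + new message."""
    parts = []
    if summary:
        parts.append(f"Conversation so far: {summary}")
    parts.extend(recent_turns)
    parts.append(new_message)
    return "\n".join(parts)
```

This is why early details in a long chat become fuzzy: they survive only in compressed summary form, while recent turns are available verbatim.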

What the chatbot remembers within a conversation

  • The events and properties discussed.
  • The last chart or investigation it generated.
  • Time windows and segments mentioned.

What it does not remember across conversations

  • Conversation summaries are not shared across different chats.
  • It does not remember preferences or patterns across all chats.

10. Caveats and Gotchas

Results are data-dependent

The chatbot queries your actual event data. If you have sparse data, recent tracking gaps, or misconfigured events, the outputs will reflect that.

No statistical significance

Investigation findings and comparisons are descriptive. The chatbot does not compute p-values, confidence intervals, or statistical significance scores.

New events may not be resolvable yet

The entity catalog is refreshed every 4 hours. Very recent events may not be resolved correctly until the next refresh.

The chatbot cannot modify data or configuration

The chatbot is read-only. It cannot create experiments, modify reports, change settings, or push data.

Chart context is scoped to the last analysis

Follow-up explanations and modifications reference the most recent chart or investigation in the conversation. If needed, re-describe the older chart you want to modify.

Long conversations may compress early context

The rolling summary compresses earlier messages to keep prompt size manageable. Starting a new chat for unrelated questions is recommended.