Lesson brief
What this module really teaches.
Questions, evidence, charts, synthesis, decisions
Analysis begins with a decision, not with a chart. AI is strongest when it knows which choice, recommendation, prioritization, or tradeoff the analysis is meant to support: it can find patterns, summarize sources, compare options, and draft a decision memo, but only when given a clear question and a standard for evidence.
The useful analyst habit is to separate evidence, interpretation, assumption, caveat, confidence, and next verification. That habit works across market research, internal data, customer feedback, and personal decisions.
Learners should ask AI for both insight and humility: what seems true, what evidence supports it, what is missing, what the data cannot prove, and what should be verified next.
Futurelab field note
The strongest non-technical analysts in our workshops do one simple thing: they ask 'what decision will this change?' before asking AI for any chart or report.
Futurelab method
The way to do the work.
Use this as the operating pattern for the module. It keeps AI practical, teachable, and reviewable.
Turn topics into decisions
Replace 'analyze competitors' with 'which competitor threatens our next six months and why?'
Label evidence strength
Separate primary sources, official docs, customer quotes, model inference, and unsupported assumptions.
Ask for what would change the answer
Good analysis states what new evidence would make the recommendation weaker or stronger.
Write the memo last
Use AI to explore patterns first, then produce a concise decision memo with caveats and action.
Core lessons
The ideas learners must own.
These are the concepts that let non-technical learners explain what they are doing and teach it back to someone else.
Decision-led questions
Replace 'analyze this data' with 'what should we do about churn next month?' or 'which vendor looks safest based on these criteria?'
Evidence labels
Separate confirmed facts, model interpretation, assumptions, and open questions. This makes AI research easier to trust and easier to challenge.
Confidence is part of the answer
Every analysis should include confidence level, caveats, and what would change the conclusion.
Operating workflow
A repeatable sequence.
Follow this order during practice. The sequence is deliberately simple so learners can remember it under real work pressure.
1. Write the decision and audience.
2. List the available data or sources.
3. Ask AI for patterns, comparisons, outliers, and caveats.
4. Require source links or evidence notes for important claims.
5. Ask what the evidence cannot prove.
6. Convert the answer into a decision memo.
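The sequence above can be sketched in code. This is a hypothetical illustration, not a required tool: the function name and prompt wording are assumptions, but it shows how front-loading the decision and the evidence standard shapes what AI is asked to do.

```python
# Hypothetical sketch: assemble the workflow steps into one analyst prompt.
def build_analysis_prompt(decision, audience, sources):
    """Return a prompt that front-loads the decision and the evidence standard."""
    source_list = "\n".join(f"- {s}" for s in sources)
    return (
        f"Decision to support: {decision}\n"
        f"Audience: {audience}\n"
        f"Available sources:\n{source_list}\n"
        "Tasks:\n"
        "1. Identify patterns, comparisons, and outliers.\n"
        "2. Attach a source or evidence note to every important claim.\n"
        "3. State what the evidence cannot prove.\n"
        "4. Draft a one-page decision memo with caveats and confidence."
    )

prompt = build_analysis_prompt(
    "Which vendor do we shortlist?",
    "Procurement lead",
    ["Vendor security docs", "Reference-call notes"],
)
print(prompt.splitlines()[0])  # → Decision to support: Which vendor do we shortlist?
```

Notice the order: the decision comes first, the tasks come last, so every downstream answer is anchored to the choice being made.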
Vendor comparison
Create criteria, score vendors, list evidence, flag missing information, and recommend the next diligence question.
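A minimal sketch of that scoring step, with invented vendors, weights, and evidence labels. The point is the structure: every score carries an evidence label, and anything resting on an assumption is flagged as the next diligence question.

```python
# Hypothetical sketch: weighted vendor scoring with an evidence label per score.
weights = {"security": 0.4, "price": 0.3, "support": 0.3}

vendors = {
    "VendorA": {"security": (4, "SOC 2 report"), "price": (3, "quote"),
                "support": (2, "assumption")},
    "VendorB": {"security": (3, "assumption"), "price": (4, "quote"),
                "support": (4, "reference call")},
}

def score(vendor):
    """Weighted sum of criterion scores."""
    return sum(weights[c] * v for c, (v, _) in vendors[vendor].items())

def missing_evidence(vendor):
    """Criteria whose score rests on an unsupported assumption."""
    return [c for c, (_, label) in vendors[vendor].items() if label == "assumption"]

for name in vendors:
    print(name, round(score(name), 2), "verify next:", missing_evidence(name))
```

A higher score with an unverified assumption behind it is weaker than a lower score backed by primary sources; the flag list makes that visible.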
Customer feedback synthesis
Cluster feedback, quantify themes where possible, quote only real source text, and propose product actions.
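The quantification step can be as simple as counting tagged quotes. In this hypothetical sketch the quotes and theme tags are invented; in practice the quotes must be real source text and the tags come from AI clustering that a human has reviewed.

```python
from collections import Counter

# Hypothetical sketch: quantify themes after each verbatim quote has been
# tagged with a reviewed theme. (quote, theme) pairs are invented examples.
tagged_feedback = [
    ("Exports keep timing out", "reliability"),
    ("I can't find the export button", "usability"),
    ("Export failed twice this week", "reliability"),
    ("Pricing page is confusing", "pricing"),
]

theme_counts = Counter(theme for _, theme in tagged_feedback)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} quote(s)")
```

Counting keeps the synthesis honest: a theme backed by two quotes out of four is reported as exactly that, not as "customers overwhelmingly say".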
Market scan
Use deep research for a cited report, then convert it into a one-page executive note.
Practice lab
Write a decision memo
Use a dataset or source pack to create a one-page memo with question, evidence, recommendation, caveats, confidence, and next step.
Artifact fields
Evidence-backed decision memo
- Decision
- Sources
- Findings
- Evidence
- Caveats
- Confidence
- Recommendation
- Next verification
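The artifact fields above can be held as a structured record rather than loose chat text. This is a hypothetical sketch (the class name and example values are invented); its value is that a memo cannot be "finished" while a field is still empty.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch: the memo's artifact fields as a structured record.
@dataclass
class DecisionMemo:
    decision: str
    sources: str
    findings: str
    evidence: str
    caveats: str
    confidence: str
    recommendation: str
    next_verification: str

    def render(self):
        """One 'Field: value' line per artifact field, in order."""
        return "\n".join(
            f"{f.name.replace('_', ' ').title()}: {getattr(self, f.name)}"
            for f in fields(self)
        )

memo = DecisionMemo(
    decision="Shortlist VendorB",
    sources="Security docs, reference calls",
    findings="VendorB leads on support responsiveness",
    evidence="Two reference calls; SOC 2 report pending",
    caveats="Pricing quote expires in 30 days",
    confidence="Medium",
    recommendation="Proceed to a paid pilot",
    next_verification="Request VendorB's SOC 2 report",
)
print(memo.render().splitlines()[0])  # → Decision: Shortlist VendorB
```

Because every field is required, the record doubles as a checklist: an empty caveat or missing confidence level fails loudly instead of silently.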
Starter prompt
Act as a cautious analyst. Decision to support: [decision]. Audience: [audience]. Data or sources: [paste]. Produce a one-page memo with question, method, key findings, evidence, caveats, confidence level, recommendation, and what to verify next.
Quality bar
What good looks like.
Before leaving the module, compare the learner artifact against these standards and common failure modes.
Decision-led
The analysis clearly supports a choice or next action.
Evidence-labeled
Facts, interpretation, assumptions, and open questions are visibly separate.
Caveated
The memo says what the data cannot prove.
Reusable
The output can be shared as a decision record, not just a chat answer.
Asking for generic analysis
Generic questions produce generic synthesis.
Mixing facts and guesses
AI can blur source evidence and model interpretation unless instructed.
No confidence level
Readers need to know how strongly to trust the answer.
Over-charting
Charts should clarify a decision, not decorate a report.
Tool categories
Tools to understand, not worship.
Research tools and deep research modes now produce richer cited reports, but learners still need to inspect sources, distinguish evidence from synthesis, and decide what should be verified.
Completion
The work that proves the lesson landed.
Finish the artifact
FAQ
Questions learners usually ask.
When should I use search vs deep research?
Use search for quick facts. Use deep research for multi-source synthesis and reusable reports.
Can AI analyze uploaded spreadsheets?
Yes, but you must confirm labels, missing data, formulas, and whether the conclusion fits the data.
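Those confirmation checks can be done before the file ever reaches AI. A minimal stdlib sketch, using invented sample data: confirm the column labels match expectations and count missing cells per column.

```python
import csv
import io

# Hypothetical sketch: sanity-check a spreadsheet export before AI analysis.
# The CSV content below is invented sample data.
raw = """month,churn_rate,notes
Jan,0.04,
Feb,,promo month
Mar,0.05,
"""

rows = list(csv.DictReader(io.StringIO(raw)))
headers = rows[0].keys()
missing = {h: sum(1 for r in rows if not r[h].strip()) for h in headers}

print(sorted(headers))        # confirm the labels match what you expect
print(missing["churn_rate"])  # → 1
```

A missing churn value in February is exactly the kind of gap AI will happily average over; counting it first means the memo can caveat it instead.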
What makes analysis usable?
A question, evidence, caveat, confidence level, recommendation, and next action.