Lesson brief
What this module really teaches.
Trust slowly, verify, protect people
AI can make work faster and better, but fluency without judgment is fragile. This final module teaches learners what to verify, what not to paste, when to disclose AI use, and how to keep human responsibility visible.
The practical rule is simple: trust slowly. Use AI to move faster through low-risk work, but slow down around anything that affects money, law, health, employment, customers, public claims, reputation, safety, or private data. Use AI to think better, not to escape accountability.
Futurelab field note
Futurelab safety training is intentionally practical. We do not begin with abstract ethics. We begin with everyday behavior: What can I paste? What must I verify? Who approves? How do I explain uncertainty?
Futurelab method
The way to do the work.
Use this as the operating pattern for the module. It keeps AI practical, teachable, and reviewable.
Classify risk first
Before using AI, decide whether the work is low-risk, review-required, or approval-required.
Use data boundaries
Create red, yellow, and green rules for what can be pasted into AI tools.
Verify important claims
Ask for sources, compare against originals, and mark uncertainty where evidence is incomplete.
Keep responsibility human
If the output affects people, opportunity, access, money, reputation, or safety, a person owns the final decision.
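For learners who want to see the method as logic, the four steps above can be sketched as a pre-flight check. This is a minimal illustration, not a real policy: the topic lists, data lists, and function names are all assumptions your team would replace with its own rules.

```python
# Illustrative pre-flight check for the Futurelab method.
# All category names and keyword lists below are placeholder assumptions.

HIGH_RISK_TOPICS = {"money", "law", "health", "employment", "customers",
                    "public claims", "reputation", "safety", "private data"}

RED_DATA = {"password", "contract", "client secret", "medical detail"}  # never paste
YELLOW_DATA = {"internal draft", "employee feedback"}  # approved tools only

def classify_task(topics, public_facing=False):
    """Step 1: decide whether work is low-risk, review-required, or approval-required."""
    if HIGH_RISK_TOPICS & set(topics):
        return "approval-required"
    if public_facing:
        return "review-required"
    return "low-risk"

def data_light(items):
    """Step 2: red/yellow/green rule for what can be pasted into an AI tool."""
    if RED_DATA & set(items):
        return "red"      # do not paste
    if YELLOW_DATA & set(items):
        return "yellow"   # approved tools and policies only
    return "green"

print(classify_task(["blog structure"]))          # low-risk
print(classify_task(["employment", "drafting"]))  # approval-required
print(data_light(["password", "notes"]))          # red
```

The point of the sketch is the order of operations: risk is classified and data is screened before any prompt is written, which is exactly the habit the module drills.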
Core lessons
The ideas learners must own.
These are the concepts that let non-technical learners explain what they are doing and teach it back to someone else.
Hallucination discipline
AI can sound confident while being wrong. Ask for sources, compare with originals, and label uncertainty.
Sensitive data rules
Passwords, contracts, personal data, client secrets, employee information, medical details, and confidential strategy need clear handling rules.
Human responsibility remains
If an AI output affects people, opportunity, access, money, reputation, or public claims, a person must own the decision.
Operating workflow
A repeatable sequence.
Follow this order during practice. The sequence is deliberately simple so learners can remember it under real work pressure.
1. Classify task risk before using AI.
2. Decide what data is safe to share.
3. Ask for sources on factual claims.
4. Verify high-impact claims against originals.
5. Check for bias, privacy, security, and overreliance.
6. Document what AI helped with and who approved the final output.
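Step 6, documenting what AI helped with and who approved it, can be sketched as a tiny record type. The field names here are illustrative assumptions; any team would adapt them to its own tracking tools.

```python
# Illustrative record for step 6 of the workflow: document AI involvement
# and keep the responsible human visible. Field names are placeholders.
from dataclasses import dataclass

@dataclass
class AIUseRecord:
    task: str             # what was produced
    risk: str             # low-risk / review-required / approval-required
    ai_helped_with: str   # what the AI actually did
    verified_by: str      # who checked the factual claims
    approved_by: str      # the named human who owns the final output

    def summary(self):
        """One-line entry suitable for a team log."""
        return (f"{self.task} [{self.risk}]: AI helped with {self.ai_helped_with}; "
                f"verified by {self.verified_by}, approved by {self.approved_by}")

record = AIUseRecord("Blog draft", "review-required",
                     "structure and edits", "Ana", "Ben")
print(record.summary())
```

A record like this makes the disclosure and accountability rules concrete: if the `approved_by` field cannot be filled in, the output is not ready to ship.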
Public blog post
Use AI for structure and edits, then verify every factual claim and disclose where needed.
Hiring support
Use AI to organize criteria or draft interview notes, not to make autonomous employment decisions.
Client strategy
Remove sensitive data, keep source notes, and require human approval before sharing.
Practice lab
Create your AI safety operating guide
Write a one-page guide for your role: what you can use AI for, what needs approval, what data is forbidden, and what must be verified.
Artifact fields
AI safety operating guide
- Allowed uses
- Forbidden data
- Review-required work
- Approval-required work
- Verification checklist
- Disclosure rule
- Owner
- Escalation
Starter prompt
Review this AI-assisted work for safety. Classify risks around accuracy, privacy, bias, security, legal or reputation impact, and overreliance. Mark what must be verified, what data should be removed, what needs human approval, and where uncertainty should be disclosed. Work to review: [paste].
Quality bar
What good looks like.
Before leaving the module, compare the learner artifact against these standards and common failure modes.
Risk-labeled
The learner can explain why the use case is low, medium, or high risk.
Privacy-aware
Sensitive information is removed or handled under approved tools and policies.
Verified
Important claims are checked against reliable sources.
Accountable
A named human owns the final output and decision.
Confusing confidence with truth
AI can be fluent and wrong at the same time.
Pasting sensitive data casually
Convenience does not remove privacy obligations.
Over-automating judgment
Human-impacting decisions need human accountability.
No disclosure norm
Teams need to agree in advance on when AI assistance must be visible.
Tool categories
Tools to understand, not worship.
The NIST AI RMF, the NIST Generative AI Profile, and OWASP guidance make risk management concrete. This module turns that guidance into everyday learner behavior.
Completion
The work that proves the lesson landed.
Finish the artifact
FAQ
Questions learners usually ask.
What should always be verified?
Law, money, health, employment, public claims, customer commitments, private data, safety, and reputation.
What is prompt injection?
A hidden or malicious instruction that tries to make an AI system ignore its real task or misuse tools.
How should beginners think about ethics?
Ask who is affected, what data is used, what could go wrong, who reviews it, and whether AI use should be disclosed.