Workflow accelerationFIG_110Module 1024 min

Futurelab AI School

Safety, Ethics, Hallucination, and Glossary

You will leave with simple rules for safe, ethical, and reliable AI use.

10

Lesson brief

What this module really teaches.

Trust slowly, verify, protect people

AI fluency without judgment is fragile. A learner must know what to verify, what not to paste, when to disclose AI use, and when a human remains responsible.

The practical rule is simple: trust slowly. Use AI to move faster through low-risk work, but slow down around people, money, law, health, employment, customers, private data, and public claims.

AI can make work faster and better, but fluency without judgment is dangerous. The final module teaches learners what to verify, what not to paste, when to disclose AI use, and how to keep human responsibility visible.

The practical rule is simple: trust slowly. Verify anything that affects money, law, health, employment, customers, public claims, reputation, safety, or private data. Use AI to think better, not to escape accountability.

Futurelab field note

Futurelab safety training is intentionally practical. We do not begin with abstract ethics. We begin with everyday behavior: what can I paste, what must I verify, who approves, and how do I explain uncertainty?

Futurelab method

The way to do the work.

Use this as the operating pattern for the module. It keeps AI practical, teachable, and reviewable.

01

Classify risk first

Before using AI, decide whether the work is low-risk, review-required, or approval-required.

02

Use data boundaries

Create red, yellow, and green rules for what can be pasted into AI tools.

03

Verify important claims

Ask for sources, compare against originals, and mark uncertainty where evidence is incomplete.

04

Keep responsibility human

If the output affects people, opportunity, access, money, reputation, or safety, a person owns the final decision.

Core lessons

The ideas learners must own.

These are the concepts that let non-technical learners explain what they are doing and teach it back to someone else.

Concept 01

Hallucination discipline

AI can sound confident while being wrong. Ask for sources, compare with originals, and label uncertainty.

Concept 02

Sensitive data rules

Passwords, contracts, personal data, client secrets, employee information, medical details, and confidential strategy need clear handling rules.

Concept 03

Human responsibility remains

If an AI output affects people, opportunity, access, money, reputation, or public claims, a person must own the decision.

Operating workflow

A repeatable sequence.

Follow this order during practice. The sequence is deliberately simple so learners can remember it under real work pressure.

  1. 01Classify task risk before using AI.
  2. 02Decide what data is safe to share.
  3. 03Ask for sources on factual claims.
  4. 04Verify high-impact claims against originals.
  5. 05Check for bias, privacy, security, and overreliance.
  6. 06Document what AI helped with and who approved the final output.
01

Public blog post

Use AI for structure and edits, then verify every factual claim and disclose where needed.

02

Hiring support

Use AI to organize criteria or draft interview notes, not to make autonomous employment decisions.

03

Client strategy

Remove sensitive data, keep source notes, and require human approval before sharing.

Practice lab

Create your AI safety operating guide

Write a one-page guide for your role: what you can use AI for, what needs approval, what data is forbidden, and what must be verified.

Artifact fields

AI safety operating guide

  • Allowed uses
  • Forbidden data
  • Review-required work
  • Approval-required work
  • Verification checklist
  • Disclosure rule
  • Owner
  • Escalation

Starter prompt

Review this AI-assisted work for safety. Classify risks around accuracy, privacy, bias, security, legal or reputation impact, and overreliance. Mark what must be verified, what data should be removed, what needs human approval, and where uncertainty should be disclosed. Work to review: [paste].

Quality bar

What good looks like.

Before leaving the module, compare the learner artifact against these standards and common failure modes.

01

Risk-labeled

The learner can explain why the use case is low, medium, or high risk.

02

Privacy-aware

Sensitive information is removed or handled under approved tools and policies.

03

Verified

Important claims are checked against reliable sources.

04

Accountable

A named human owns the final output and decision.

01

Confusing confidence with truth

AI can be fluent and wrong at the same time.

02

Pasting sensitive data casually

Convenience does not remove privacy obligations.

03

Over-automating judgment

Human-impacting decisions need human accountability.

04

No disclosure norm

Teams should decide when AI assistance should be visible.

Tool categories

Tools to understand, not worship.

NIST AI RMF, NIST generative AI profile work, and OWASP guidance make risk management concrete. This module turns that into everyday learner behavior.

NIST AI RMFOWASP LLM Top 10company AI policycontent credentialssource checklistsred-team prompts

Completion

The work that proves the lesson landed.

Module to-dos

Finish the artifact

0/4 complete

FAQ

Questions learners usually ask.

What should always be verified?

Law, money, health, employment, public claims, customer commitments, private data, safety, and reputation.

What is prompt injection?

A hidden or malicious instruction that tries to make an AI system ignore its real task or misuse tools.

How should beginners think about ethics?

Ask who is affected, what data is used, what could go wrong, who reviews it, and whether AI use should be disclosed.