Workflow acceleration · Module 09 · 25 min

Futurelab AI School

AI Agents for Non-Technical People

You will be able to explain agents clearly and design one safe agent workflow with human approval.

09

Lesson brief

What this module really teaches.

Goals, tools, approvals, logs, rollback

An agent is a workflow with a brain, tools, permissions, memory, and a reporting obligation. It is not magic, and it is not automatically safe.

Non-technical teams should understand agents through boundaries: what the agent can read, what it can draft, what it can change, when it must ask, what it logs, and how a human can stop or reverse it.

An agent is not just a chatbot with a dramatic name. A useful agent can pursue a goal, reason through steps, use tools, keep track of progress, and report what happened. That makes it powerful, but also riskier than a normal prompt.

Non-technical people should evaluate agents as workflows. What can it read? What can it change? What must it ask before acting? What log will it leave? What happens if it gets confused? A safe agent is designed around boundaries.

Futurelab field note

In Futurelab sessions, we do not start agents with external actions. We start read-only: research, summarize, draft, classify, prepare. Only after trust is built do teams consider write actions, and even then with approvals.

Futurelab method

The way to do the work.

Use this as the operating pattern for the module. It keeps AI practical, teachable, and reviewable.

01

Begin read-only

The safest first agent researches, classifies, summarizes, prepares, or drafts without changing external systems.

02

Design permissions as levels

Separate read, draft, approval-required, and action-allowed levels. Most teams should spend their time in the first three levels before enabling action.
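The levels form an ordered scale, which is easy to make explicit. A minimal Python sketch, assuming the module's four level names; the `Permission` enum and `allowed` helper are illustrative, not a product API:

```python
from enum import IntEnum

class Permission(IntEnum):
    # Ordered levels from the module: most teams stay in the first three.
    READ = 1               # read inputs, research, classify
    DRAFT = 2              # prepare drafts, never send or change anything
    APPROVAL_REQUIRED = 3  # may propose an action; a human must approve it
    ACTION = 4             # may act without a per-step approval

def allowed(agent_level: Permission, needed: Permission) -> bool:
    """A step is allowed only if the agent's level covers what the step needs."""
    return agent_level >= needed
```

Under this sketch, a draft-only agent can read and draft, but a step that needs `ACTION` is refused and escalates to a human.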

03

Write tripwires

The agent should stop for sensitive data, low confidence, public impact, unexpected tool use, cost spikes, or irreversible changes.
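A tripwire list like this can be expressed as a simple pre-step check. The field names and thresholds below are placeholder assumptions a team would set for itself, not a standard:

```python
def check_tripwires(step: dict) -> list[str]:
    """Return every tripwire a proposed step trips.

    Any non-empty result means: stop and ask a human.
    Thresholds here are placeholders for a team's own limits.
    """
    trips = []
    if step.get("touches_sensitive_data"):
        trips.append("sensitive data")
    if step.get("confidence", 1.0) < 0.7:        # placeholder confidence floor
        trips.append("low confidence")
    if step.get("public_impact"):
        trips.append("public impact")
    if step.get("tool") not in step.get("expected_tools", []):
        trips.append("unexpected tool use")
    if step.get("estimated_cost", 0.0) > 5.0:    # placeholder cost budget
        trips.append("cost spike")
    if step.get("irreversible"):
        trips.append("irreversible change")
    return trips
```

A routine read-only step passes cleanly, while a low-confidence step using an unplanned tool returns multiple reasons to stop.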

04

Demand a final report

Every run should say what it read, what it did, what it changed, what failed, what is uncertain, and what a human should review.
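The report the module demands maps naturally onto a small record. This `RunReport` structure is an illustrative sketch, not a standard format:

```python
from dataclasses import dataclass, field

@dataclass
class RunReport:
    """One run's final report: the six things every run should say."""
    read: list = field(default_factory=list)       # what it read
    did: list = field(default_factory=list)        # steps it performed
    changed: list = field(default_factory=list)    # anything it modified (ideally empty early on)
    failed: list = field(default_factory=list)     # steps that did not work
    uncertain: list = field(default_factory=list)  # things it could not verify
    review: list = field(default_factory=list)     # items for a human to check

    def summary(self) -> str:
        return (f"read {len(self.read)}, did {len(self.did)}, "
                f"changed {len(self.changed)}, failed {len(self.failed)}, "
                f"uncertain {len(self.uncertain)}, to review {len(self.review)}")
```

A read-only research run should finish with a non-empty `read` and `review`, and an empty `changed`.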

Core lessons

The ideas learners must own.

These are the concepts that let non-technical learners explain what they are doing and teach it back to someone else.

Concept 01

Observe, plan, act, check, report

This is the plain-English loop. The agent reads the situation, makes a plan, uses tools, checks results, and explains what happened.
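The loop can be sketched in a few lines. The function names mirror the module's loop; the toy observe/plan/act/check stubs in the usage example are invented for illustration:

```python
def run_agent(observe, plan, act, check, report):
    """One pass of the loop: observe, plan, act, check, report.
    Each argument is a function the team supplies."""
    situation = observe()
    steps = plan(situation)
    results = [act(step) for step in steps]
    issues = check(results)
    return report(situation, results, issues)

# Toy usage: a read-only run that only produces drafts.
run_report = run_agent(
    observe=lambda: "three meeting notes",
    plan=lambda situation: ["summarize", "list open questions"],
    act=lambda step: f"draft: {step}",
    check=lambda results: [],  # nothing flagged in this toy run
    report=lambda situation, results, issues: {"did": results, "issues": issues},
)
```

The point of the shape is that every stage is a visible seam: a team can log, approve, or stop at any of the five steps.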

Concept 02

Permissions are the design

Read-only, draft-only, approval-required, and action-allowed are distinct levels. Most early agent workflows should stay read-only or draft-only.

Concept 03

Logs create trust

A person should see what the agent did, what tools it used, what it changed, what it could not verify, and what needs review.

Operating workflow

A repeatable sequence.

Follow this order during practice. The sequence is deliberately simple so learners can remember it under real work pressure.

  1. Name the recurring workflow.
  2. Write the goal and success condition.
  3. List inputs and tools the agent can access.
  4. Define read, draft, approval, and action permissions.
  5. Set tripwires for sensitive data, low confidence, cost, public impact, or unexpected tool use.
  6. Require a final report, human approval, and rollback path.
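The six steps above can be captured as a plain blueprint document. The field names mirror the sequence; the example values are invented for illustration:

```python
# A hypothetical filled-in blueprint, one field per step of the sequence.
blueprint = {
    "workflow": "weekly meeting follow-up",              # 1. name it
    "goal": "draft task assignments from meeting notes", # 2. goal...
    "success": "owner approves drafts within one day",   #    ...and success condition
    "inputs": ["meeting notes"],                         # 3. inputs and tools
    "tools": ["notes reader", "task drafter"],
    "permissions": {"read": True, "draft": True,         # 4. permission levels
                    "approval_required": True, "action": False},
    "tripwires": ["sensitive data", "low confidence", "cost spike",
                  "public impact", "unexpected tool use"],  # 5. tripwires
    "requires": ["final report", "human approval", "rollback path"],  # 6. gates
}

def is_safe_to_pilot(bp: dict) -> bool:
    """A first pilot should require approval and not have unattended action enabled."""
    return bp["permissions"]["approval_required"] and not bp["permissions"]["action"]
```

Checking the blueprint before a pilot is itself a tripwire: a blueprint with `action` enabled fails the check and goes back to the designer.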
01

Research prep agent

Collect background on a company, summarize sources, identify open questions, and prepare a meeting brief.

02

Documentation agent

Find outdated SOPs, draft updates from source material, and ask an owner to approve changes.

03

Operations follow-up agent

Read meeting notes, draft task assignments, and wait for approval before sending anything.

Practice lab

Design a safe agent workflow

Create one agent map for research, operations, sales prep, documentation, hiring support, or meeting follow-up.

Artifact fields

Safe agent blueprint

  • Goal
  • Inputs
  • Tools
  • Permissions
  • Plan
  • Tripwires
  • Report
  • Human approval

Starter prompt

Design a safe AI agent workflow for [recurring task]. Include goal, inputs, tools, read/write permissions, observe-plan-act-check-report steps, approval gates, tripwires, logs, fallback path, success metrics, and what the human reviews before anything is sent or changed.

Quality bar

What good looks like.

Before leaving the module, compare the learner artifact against these standards and common failure modes.

01

Bounded

Inputs, tools, permissions, and forbidden actions are explicit.

02

Inspectable

A human can see the plan, tool use, outputs, and uncertainty.

03

Approval-aware

The agent asks before external or high-impact actions.

04

Recoverable

There is a rollback or correction path if something goes wrong.

01

Calling every chatbot an agent

Agents are about tool use, planning, and action boundaries, not branding.

02

Starting with write access

Untrusted workflows should not begin by changing live systems.

03

No stop conditions

A useful agent needs tripwires, not just goals.

04

No logs

Without logs, trust cannot accumulate.

Tool categories

Tools to understand, not worship.

Agent products and SDKs now emphasize tool use, handoffs, tracing, and guardrails. This module translates those concepts into plain-English workflow design.

ChatGPT apps · OpenAI Agents SDK concepts · Microsoft Copilot Studio · Zapier Agents · n8n · Relevance AI · Make

Completion

The work that proves the lesson landed.

Module to-dos

Finish the artifact


FAQ

Questions learners usually ask.

What is the difference between automation and an agent?

Automation follows a fixed path. An agent can decide steps and use tools inside boundaries.

What should agents not do first?

Do not start with money movement, public posting, HR decisions, legal actions, or irreversible data changes.

How do I present agents to leadership?

Show the workflow: input, tools, approvals, logs, output, risk, and measurable time saved.