Workflow acceleration · Module 02 · 22 min

Futurelab AI School

Prompt Engineering Best Practices

You will be able to turn vague intent into a clear AI brief and improve the output without starting again.


Lesson brief

What this module really teaches.

Brief, constrain, critique, reuse

Prompting is professional briefing. The model is powerful, but it cannot read the situation you have not described.

A strong prompt tells the AI the task, audience, context, source material, constraints, output shape, quality bar, and verification step. The result is less generic because the instruction is less generic.

Prompting is not a trick language. It is the discipline of briefing another intelligent worker. If the task, audience, source material, constraints, and quality bar are unclear, the model fills gaps with averages.

The best prompts are practical briefs. They tell the AI what role to play, what material to use, what output to produce, what to avoid, and how success will be judged. Good prompting also includes revision: critique, compare, improve, and verify.

Futurelab field note

In workshops, we teach prompting as a reusable work habit. The learner writes a weak prompt, then turns it into a brief. The difference is immediate: less fluff, better structure, clearer assumptions, and fewer invented details.

Futurelab method

The way to do the work.

Use this as the operating pattern for the module. It keeps AI practical, teachable, and reviewable.

01

Brief before asking

Write the task as if you are briefing a capable colleague: what matters, who it is for, and what would make the answer useful.

02

Give the output shape

Ask for a memo, table, rubric, checklist, slide outline, email, decision note, SOP, or critique. Shape reduces ambiguity.

03

Use examples when quality matters

If you know the tone, structure, or answer style you want, show a short example. Models follow patterns better when the pattern is visible.

04

Revise with evidence

Do not say 'make it better'. Name what failed: too long, too vague, unsupported, wrong audience, weak examples, missing caveats.

Core lessons

The ideas learners must own.

These are the concepts that let non-technical learners explain what they are doing and teach it back to someone else.

Concept 01

Context beats cleverness

Give the model the situation, audience, goal, sources, constraints, and examples. Clever prompt phrases matter less than useful context.

Concept 02

Specify the shape

Ask for a table, memo, checklist, email, slide outline, rubric, or decision note. If you do not define the output shape, the AI will choose one for you.

Concept 03

Critique before final

For important work, ask the model to identify missing information, weak logic, assumptions, and risks before drafting the final version.

Operating workflow

A repeatable sequence.

Follow this order during practice. The sequence is deliberately simple so learners can remember it under real work pressure.

  1. State the task in one sentence.
  2. Add audience, context, source material, and constraints.
  3. Define the output format and length.
  4. Ask for clarifying questions if the task is ambiguous.
  5. Ask for a critique or risk review.
  6. Revise once with specific feedback.
01

Board update

Turn scattered notes into a crisp update by specifying audience, decision context, length, risks, and evidence labels.

02

Client email

Ask for three versions with different tones, then critique them against the relationship and desired next action.

03

Research question

Ask the model to first identify missing context and assumptions before drafting the report.

Practice lab

Create a prompt playbook

Write five reusable prompts for summary, email, research, presentation, and risk review. Each prompt should include role, context, output format, and verification step.

Artifact fields

Prompt brief template

  • Role
  • Task
  • Audience
  • Context
  • Source material
  • Constraints
  • Output format
  • Verification

Starter prompt

Act as a practical prompt coach. Improve this prompt for a non-technical professional. Make it clear, specific, and reusable. Add sections for task, audience, context, source material, constraints, output format, and verification. Prompt: [paste prompt].

Quality bar

What good looks like.

Before leaving the module, compare the learner artifact against these standards and common failure modes.

01

Clear goal

The prompt says what job the answer must perform.

02

Useful context

The prompt includes audience, situation, source material, and constraints.

03

Defined format

The output can be reviewed quickly because its structure is known in advance.

04

Revision loop

The learner can improve the same prompt without starting over.

01

Vague verbs

Words like "improve", "analyze", "summarize", or "make it professional" need a target and a standard.

02

No audience

A founder memo, student explainer, sales email, and internal SOP need different language.

03

No source boundary

When accuracy matters, the model must know what material it is allowed to use.

04

No verification request

Important outputs need a request for assumptions, missing evidence, and risks.

Tool categories

Tools to understand, not worship.

OpenAI and Anthropic prompting guidance both emphasize clear instructions, relevant context, examples, output format, and iteration. This lesson translates that guidance into non-technical work habits.

ChatGPT · Claude · Gemini · team prompt docs · prompt libraries

Completion

The work that proves the lesson landed.

Module to-dos

Finish the artifact


FAQ

Questions learners usually ask.

Is prompt engineering still useful as models improve?

Yes. Better models reduce tricks, but they still need context, goals, examples, and review criteria.

Should prompts be long?

They should be complete, not bloated. Add context only when it changes the answer.

What if the first answer is bad?

Tell the model what failed, add missing context, ask for alternatives, and revise. Do not restart blindly.