AI Security for Non-Technical People

Real attacks. Practical checklists. No jargon.

What You'll Learn

Most AI security advice falls into one of two camps: enterprise whitepapers written for security teams, or vague blog posts that tell you to "be careful." Neither helps someone who just wants to use AI tools at work without making a costly mistake.

This is the guide I wish existed when I started combining a security investigation background with daily AI tool use. It is written for people who already use AI — and want to understand the risks clearly enough to manage them, not avoid the tools entirely.

  • Real incidents — The $25.6 million Arup deepfake fraud. The Samsung semiconductor leak. The MGM casino attack. What actually happened, what the attackers did, and what would have stopped it.
  • How the attacks work — Prompt injection, voice cloning, AI-powered phishing, credential theft. Plain explanations of each attack type, not technical deep dives.
  • Your data exposure — What AI vendors actually see when you use their tools. The difference between free, paid, and enterprise tiers — and why it matters for what you share.
  • A practical checklist — What never goes into AI tools. How to handle API keys. What to look for before adopting a new tool. Things you can act on today.
  • Team guidance — How to set a sensible AI use policy without becoming the person who banned all the useful tools.

I have a CEH certification and spent years in security roles before moving into AI consulting. This is not a security textbook. It is the practical understanding that lets you make better decisions — the same understanding I apply to my own daily AI workflow.

Time commitment: Around 2 hours total. Module 4 is the most actionable — work through it with your actual tools in front of you.

Modules

  1. Why This Matters Now

    The threat landscape in plain language. Real incidents — including a $25.6 million video call fraud and the Samsung data leak — show why everyone using AI needs to understand what changed in 2023.

  2. How AI Attacks Actually Work (members only)

    Prompt injection, data poisoning, deepfakes, voice cloning, AI-powered phishing. Each attack type explained with real examples — what the attacker does, why it works, and what makes AI different from what came before.
  3. Your Data and AI (members only)

    What happens to what you share with AI tools. The difference between free, paid, and enterprise tiers. Training data concerns, data leakage, shadow IT. When local models make sense, when cloud is fine.
  4. The Security Checklist (members only)

    What never goes into AI tools. How to manage API keys. Evaluating tools before you adopt them. Recognising AI-generated content. Specific steps you can take before you close this tab.
  5. Building a Security Culture (members only)

    For team leads and managers. How to write AI use policies that people actually follow. The conversation to have with your team. Balancing access with safety without killing the productivity gains you adopted AI for.