Module 5: Building a Security Culture
The most technically correct AI security policy in the world fails if people route around it.
IBM’s 2025 Cost of a Data Breach Report found that 97% of organisations that suffered an AI-related breach lacked AI access controls. Cisco’s research found that 43% of employees use AI tools at work without telling their manager. These numbers do not reflect a careless workforce; they reflect a workforce that has decided the productivity benefits of AI outweigh the risk of non-compliance with policies they experience as obstacles.
If you are responsible for a team, your security problem is not primarily technical. It is cultural. The question is how to make secure behaviour the path of least resistance rather than a bureaucratic hurdle people route around.
Why Bans Fail
The reflex response to AI security risk, especially after a high-profile incident, is to ban the tools. Samsung did it after the semiconductor leak. JPMorgan Chase, Goldman Sachs, and Bank of America banned ChatGPT in February 2023. Many of those bans were eventually softened or replaced with more nuanced policies as the organisations recognised that a blanket ban simply moved AI use underground.
Banning a tool that people find genuinely useful does not make the underlying need go away. If ChatGPT helps someone draft a report in 20 minutes that would otherwise take two hours, banning it means that person will do one of three things: use it anyway from a phone or personal account (creating a shadow IT problem with no visibility), stop getting the benefit and become less productive (creating a retention and morale problem), or switch to a different tool that may have worse security properties (solving nothing).
The goal is not zero AI use. The goal is AI use that does not create avoidable risk.
The Policy Conversation
Before writing a policy, have the conversation. With your team, not at your team.
The questions worth discussing:
What are people actually using? You will likely be surprised. Shadow IT surveys consistently find broader AI tool adoption than managers assume. People are using tools they do not mention because they expect a negative reaction. Finding out what is actually in use — without punishing the disclosure — gives you an accurate picture to work from.
What are they using it for? There is a significant difference between “I use Grammarly-style grammar checking” and “I paste client proposals into ChatGPT to improve them.” The former is low risk. The latter depends entirely on which client, what the proposal contains, and which tier of ChatGPT.
What would they need to stop doing it the risky way? Often the answer is: access to an enterprise tool that does the same thing. If your organisation provides a Microsoft 365 subscription, Copilot for Microsoft 365 has data handling terms that consumer ChatGPT does not. If people are using consumer ChatGPT for document work because they have no alternative, the policy question is partly a procurement question.
Writing a Policy That Works
The principles that make AI use policies effective:
Be specific about the risk, not just the rule. “Don’t enter confidential information into AI tools” is harder to follow than “Don’t enter client names, specific financial figures, or unpublished strategy documents into consumer-tier AI tools.” The first requires judgement about what counts as confidential. The second gives clear examples people can apply without asking for clarification every time.
Create a tiered framework rather than a binary. Distinguish between approved tools (enterprise tier, data handling agreement in place), conditional tools (consumer tier, acceptable for specific use cases that are listed), and prohibited tools (no acceptable use case for work purposes). In practice, most organisations land on two categories: approved for specific tasks, and not approved. Either way, a tiered framework is more actionable than a blanket ban and more protective than blanket permission. A sketch of what a tiered tool register might look like follows these principles.
Specify what “sensitive” means in your context. Every organisation has specific categories of information that are especially sensitive: the client list, the pricing model, the technical architecture, the acquisition target. Make these explicit. “Sensitive information” is ambiguous. “Do not enter client names, signed contract terms, or the contents of the [specific folder]” is not.
Include a reporting path, not just prohibitions. People need to know what to do if they think they may have made a mistake. If the only thing the policy says is “don’t do this,” then someone who has already done it has no good option — they can self-report and face consequences, or they can say nothing and leave a potential data issue unaddressed. A no-blame reporting path for incidents encourages early disclosure, which limits damage.
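For teams that want the tiered framework to be more than a document, one option is to keep the tool register in a machine-readable form alongside the written policy, so that "can I use this tool for this?" becomes a lookup rather than a judgement call. The sketch below is illustrative only: the tool names, tiers, and notes are placeholders rather than recommendations, and the choice to treat unlisted tools as "ask first" is an assumption you would make for your own context.

```python
# Illustrative only: a small, machine-readable tool register kept alongside the
# written policy. Tool names, tiers, and notes are placeholders, not
# recommendations; replace them with your own list.

from enum import Enum


class Tier(Enum):
    APPROVED = "approved"        # enterprise tier, data handling agreement in place
    CONDITIONAL = "conditional"  # consumer tier, listed use cases only
    PROHIBITED = "prohibited"    # no acceptable use case for work purposes


TOOL_REGISTER = {
    "claude-pro":         {"tier": Tier.APPROVED,    "notes": "enterprise data terms in place"},
    "copilot-m365":       {"tier": Tier.APPROVED,    "notes": "covered by existing M365 agreement"},
    "chatgpt-free":       {"tier": Tier.CONDITIONAL, "notes": "drafting only; no client or project data"},
    "unvetted-extension": {"tier": Tier.PROHIBITED,  "notes": "no data handling terms"},
}


def lookup(tool: str) -> str:
    """Return a one-line answer a team member can act on."""
    entry = TOOL_REGISTER.get(tool)
    if entry is None:
        # Unlisted tools default to "ask first" - an assumption, not a rule from the policy text.
        return f"{tool}: not listed; check with the policy owner before using it for work"
    return f"{tool}: {entry['tier'].value} ({entry['notes']})"


if __name__ == "__main__":
    print(lookup("chatgpt-free"))
    print(lookup("some-new-tool"))
```

Whether or not anyone ever runs it, writing the register forces the specificity the first principle asks for: every tool gets a named tier and a one-line reason.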
A Minimal Workable Policy
For a small team without a security function, this is the minimum viable AI use policy:
AI Tool Use Policy — [Team/Organisation Name]
Approved tools for work use: [List specific tools and tiers — e.g., “Claude Pro, Microsoft Copilot for M365”]
Conditionally approved: [Consumer-tier tools for specific use cases — e.g., “ChatGPT free tier for drafting only, using no client or project-specific information”]
What not to enter into any AI tool (regardless of tier):
- Client names, contact information, or project details
- Unpublished financial data
- Source code from internal systems
- Employee information
- Anything marked confidential or restricted in our document classification system
Verification rule: Any unusual financial request, credential reset, or access grant received via email, message, or voice call — regardless of how credible it appears — requires verification by calling back on a number from our own contact records. Not a number provided in the request.
If you think something went wrong: Contact [name/email] without delay. No-blame reporting. Early disclosure limits damage.
This takes under an hour to write and covers the most common failure modes.
The Manager’s Specific Responsibilities
If you manage people who use AI tools, your responsibilities beyond the policy itself:
Model the behaviour. If you are openly using consumer ChatGPT to process client materials while telling your team not to, the policy is decorative. What you do visibly sets norms more effectively than what you write.
Remove the friction from the safe option. If using the approved enterprise tool requires three extra steps, a separate login, and a slower interface, people will use the convenient one. Reducing friction on the approved path is infrastructure work, but it directly drives compliance.
Have the conversation when something goes wrong. The instinct when a team member reports a potential data mistake is to address the mistake and move on. The conversation about why it happened — what they were trying to accomplish, what made the risky option seem reasonable — is more valuable for preventing the next one.
Stay current. The threat landscape has changed significantly every six months for the past two years. A policy written in early 2023 is probably missing attack types that emerged later. A review once or twice a year, using the framework in Module 2 as a reference, is realistic and sufficient.
The Harder Conversation
There is a version of this conversation that gets into values as much as policy, and it is worth having explicitly.
AI tools offer genuine productivity benefits. The people on your team who are using them are, in many cases, trying to do their jobs better. The security risks are real, but so is the cost of treating every AI use as suspect. A culture of paranoia about tools will slow you down and signal distrust of people’s judgement.
The balance that works: high trust on the vast majority of tasks where AI use is completely fine, combined with specific, non-negotiable rules on the narrow set of scenarios where the risk is real. Most AI use by most people most of the time is not a security concern. The cases where it is (client data in consumer tools, financial requests that skip verification channels, credentials in repositories) are specific enough to be named.
Security culture is not about making people afraid of the tools. It is about making sure that the handful of decisions that carry real risk get the attention they deserve, so that everything else can proceed without friction.
That is the full course. The practical summary is in Module 4. The framework for understanding why the risks exist is in Modules 1 through 3. And this module is the bridge to the part of the problem that is not yours to solve alone — it lives in the shared habits of the people you work with.
The threat landscape will continue changing. The principles — verify through channels you initiate, separate work from personal AI accounts, treat AI tools as publicly visible megaphones for whatever you type into them — are stable enough to build habits around.
Check Your Understanding
Answer all questions correctly to complete this module.
1. Why does banning AI tools entirely tend to fail?
2. What does the chapter recommend a policy include beyond prohibitions?
3. What is the goal of AI security culture?