How to sound like an AI legend in human risk
Beautiful renegades, mavericks, and deviants,
You probably get asked about AI every week, maybe even every day. I know I do.
“How’s it changing the world of human risk management?”
“What are you doing about it?”
“Is this a risk or an advantage?”
There was a time when I would have answered these questions with (well-meaning) long-winded explanations that (I soon realised) only bemused and befuddled.
That wasn’t the move.
Now, I have a different answer — it takes seconds, gets to the heart of the matter …and actually helps.
It allows whoever’s asking to see AI’s risks and advantages, without weighing them down with detail and semantics. Which is why I think it’s worth sharing here, so that you can use it too.
When I’m asked about AI, I talk about four lenses:
1. The attacker advantage
AI amplifies the capability of cybercriminals. It lowers the cost and raises the quality of attacks: deepfakes, voice cloning, hyper-personalized phishing, automated social engineering, and tools that write malicious code. This means the scale, sophistication, and believability of attacks are increasing, and so are the risks to your workforce.
2. The professional advantage
The same tools can make our work as human risk professionals more effective. AI helps us analyze risky behaviors faster, personalize interventions, automate communications, and generate more engaging training and comms. It speeds reporting, trend analysis, and measurement, so we can focus on what actually changes behavior instead of admin.
3. The workforce environment
AI is reshaping the workplace, mostly for the better. But there’s a flipside: people are adopting generative AI for everyday work, often without a clear grasp of the risks (65% of participants now use AI, but 52% have had no training on its security or privacy risks. Source: Oh, Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2025–2026). That brings shadow IT, new data leakage paths, and new behavior patterns, along with anxiety, uncertainty, and sometimes overconfidence. We need to understand how this shift is affecting our people and tailor culture and awareness accordingly. AI is also changing the size and shape of our organizations, or it will soon, and before long we’ll all have human and non-human work colleagues.
4. Securing the organization’s AI
Your organization is almost certainly adopting AI. Securing AI systems means getting governance, data protection, access control, ethical use, and resilience right, and making those controls real in day-to-day practice. Your expertise in behavior and culture is critical, from responsible adoption and safe use to embedding secure habits in AI development and deployment. Today, AI misuse is already a behavioral security challenge that demands a data-driven approach beyond traditional security awareness tactics and phishing sims. But the evolution won’t stop there: we will soon need to think about behavioral security for non-human identities as well as human beings. Agentic AI does not invent behavior; it learns and mimics, which makes a deeper grasp of behavioral security more important than ever. (I’ll share more on that another time.)
A line I keep coming back to… Marcus Aurelius (you know the one) wrote it to himself:
“The impediment to action advances action. What stands in the way becomes the way.”
Don’t think of AI as an obstacle; treat it as your map.
Look at AI through four lenses: it’s reshaping the threat, your craft, the environment your people operate in, and the systems you must help secure.
Your job is to bring a human-centered, behavior-first lens to each. So start there.
Does this resonate? Book a call with me and we’ll map your route in 30 minutes. If helpful, I’ll also share how other human risk managers are using AI every day in their jobs.
— Oz A