
A decision scientist’s perspective on AI

by Rob Brown, Sr. Director of Cyber Resilience

Don't get caught up in AI FOMO

As the Senior Director of Cyber Resilience at Resilience, I bring a somewhat unconventional perspective to the table. Unlike many in our industry who come from traditional cybersecurity or insurance backgrounds, my expertise lies in decision science. Throughout my career, I’ve been fascinated by one central question: How can we help people make good decisions before they act, rather than simply praising, in hindsight, whatever decisions happened to produce desired outcomes? It’s a simple question with profound implications.

This question has never been more relevant than with the advent of generative AI, as organizations rush to implement AI solutions across their operations. At the moment, we observe two main threats to company value “caused” by AI. One is the obvious external threat; the other is an internal threat caused by a failure of executive function. The surprising truth I want to share with you is this: hurried adoption of AI can be an extremely dangerous misstep for many organizations, measured in the opportunity costs of wasted time and resources.

How bad actors are leveraging AI

At Resilience, one of our most pressing concerns is the increasingly sophisticated use of AI by threat actors. This isn’t a theoretical future threat—it’s happening now.

For example, we’re seeing more effective deepfakes, both video and audio, that can convincingly impersonate executives or trusted figures. But even more concerning is the evolution of phishing and email attacks. Traditional defenses like spotting misspellings or awkward phrasing are becoming obsolete as AI generates increasingly persuasive communications.

For our clients, this translates to heightened risk in two key areas: ransomware and business email compromise (BEC)/wire fraud. While ransomware often dominates headlines, BEC is potentially just as impactful in aggregate, representing a significant and growing threat vector enhanced by AI capabilities.

How careless actors are wasting AI opportunity

The current AI fervor reminds me of the early days of the big data analytics movement a decade ago. Despite the promise and hype, industry reports revealed high failure rates for big data initiatives. The primary reason? Lack of close alignment with business value, objectives, and goals.

We’re seeing similar patterns with AI today, with RAND reporting comparable failure rates for AI initiatives. The critical mistake organizations made with big data was starting with the data (“What can I do with this?”) instead of first clarifying and defining the problem (“What are we trying to achieve, and why?”). We’re seeing the same problem emerge again.

This backward approach inevitably leads to solutions in search of problems, wasting resources and undermining confidence in otherwise promising technologies. Ultimately companies just end up developing interesting science fair projects.

The correct sequence for technology adoption should be: Values (Goals/Objectives) → Choices → Data. Before considering any AI solution, clearly define the problem you’re trying to solve and its business value. Then deal with information and data that can help you make distinctions about the best path forward.

Be comfortable with uncertainty in your business case. Perfect information is rarely available, but estimation methods can provide sufficient confidence to move forward. Focus on ranges and probabilities rather than precise values.
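
To make that concrete, here is a minimal sketch, in Python, of a range-based business case built with a simple Monte Carlo simulation. The benefit, cost, and success-probability figures are hypothetical placeholders rather than numbers from any real initiative; the point is that the result is a distribution of outcomes, not a single forecast.

```python
import random

# Minimal Monte Carlo sketch: express an AI initiative's business case as a
# range of outcomes rather than a single point estimate. All figures are
# hypothetical placeholders.

N = 10_000
outcomes = []
for _ in range(N):
    # Uncertain inputs modeled as triangular(low, high, mode) distributions
    annual_benefit = random.triangular(200_000, 1_200_000, 500_000)
    implementation_cost = random.triangular(300_000, 900_000, 450_000)
    p_success = random.uniform(0.4, 0.8)  # chance the initiative delivers at all
    realized_benefit = annual_benefit if random.random() < p_success else 0.0
    outcomes.append(realized_benefit - implementation_cost)

outcomes.sort()
p10, p50, p90 = (outcomes[int(N * q)] for q in (0.10, 0.50, 0.90))
print(f"Net value  P10: {p10:,.0f}   P50: {p50:,.0f}   P90: {p90:,.0f}")
print(f"Chance of losing money: {sum(o < 0 for o in outcomes) / N:.0%}")
```

A readout like “the median case is modestly positive, but there is a meaningful chance of losing money” tells a decision maker far more than a single-point ROI figure ever could.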

For those looking to deepen their understanding of this approach, I recommend two resources:

Implications for insurance: Assessing cyber risk in a dynamic landscape

Cyber risk assessment differs fundamentally from traditional P&C insurance, in which catastrophic events like hurricanes, fires, or earthquakes follow relatively stable patterns over time. In cyber, the threat landscape evolves continuously as adversaries develop new techniques and new vulnerabilities emerge.

At Resilience, we’ve developed a dynamic approach to address this reality. We frequently update our “actuarials”—which are actually sophisticated machine learning models—using hierarchical Bayesian networks. These models are powerful, but they’re only as good as the information that feeds them.

This is where subject matter experts become crucial. Our models incorporate both hard data and expert judgment, creating a more comprehensive risk picture than either could provide alone. This hybrid approach enables us to implement what military strategists call the OODA Loop (Observe, Orient, Decide, Act) in our risk operations center to maintain an edge over adversaries.

But despite the industry buzz, we’re not driven by FOMO to integrate the latest evolving AI systems into our core risk assessment tools. Instead, we rely on cost-effective and powerful classical mathematical models—specifically hierarchical Bayesian networks—that excel with smaller datasets.

These models allow us to incorporate expert judgment and handle uncertainty effectively without requiring massive data infrastructure. That said, we do leverage AI internally for operational efficiency, such as using meeting-summary features to improve customer engagement or AI-based scanning systems that alert us to emerging threats. We know the capabilities of our methods, and we have a clear understanding of how they support the executives who make security investment decisions.
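
As an illustration of the general idea, rather than of Resilience’s actual models, the sketch below combines a hypothetical expert prior with a small observed dataset using a simple Beta-Binomial update, one of the basic building blocks behind hierarchical Bayesian approaches. The prior strength, incident count, and number of firms are assumptions chosen only to show the mechanics.

```python
from scipy import stats

# Minimal sketch (not Resilience's production models): combine an expert
# prior with a small dataset via a Beta-Binomial update.

# Hypothetical expert judgment: roughly a 10% annual incident rate, held with
# moderate confidence (equivalent to about 20 prior observations).
prior_alpha, prior_beta = 2.0, 18.0

# Hypothetical small dataset: 3 incidents observed across 25 comparable firms.
incidents, firms = 3, 25

posterior = stats.beta(prior_alpha + incidents, prior_beta + (firms - incidents))
low, high = posterior.ppf([0.05, 0.95])
print(f"Posterior mean incident rate: {posterior.mean():.1%}")
print(f"90% credible interval: {low:.1%} to {high:.1%}")
```

Even with only a couple dozen observations, the expert prior keeps the estimate stable while the credible interval keeps the remaining uncertainty explicit, which is exactly the property that makes these methods attractive when large datasets are not available.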

A parting admonition

If there’s one piece of advice I can leave you with, it’s this: a well-defined problem is more than half solved. Take the time to understand your “why” and the potential value before jumping into AI solutions.

At Resilience, we often say, “We go fast by going slow.” This seeming paradox captures the essence of thoughtful technology adoption. By first taking the time to clarify our objectives, identify meaningful strategic choices for achieving them, and understand how key uncertainties put those objectives at risk, we avoid costly rework and detours and ultimately implement more effective solutions more quickly. With the right level of discipline and practice, you can do this, too.
