
A decision scientist’s perspective on AI

by Rob Brown, Sr. Director of Cyber Resilience

Don't get caught up in AI FOMO

As the Senior Director of Cyber Resilience at Resilience, I bring a somewhat unconventional perspective to the table. Unlike many in our industry, who come from traditional cybersecurity or insurance backgrounds, I come to this work from decision science. Throughout my career, I’ve been fascinated by one central question: How can we help people make good decisions before they act, not just retrospectively praise whatever decisions happen to correspond with desired outcomes? It’s a simple question with profound implications.

This question has never been more relevant than now, with the advent of generative AI, as organizations rush to implement AI solutions across their operations. At the moment, we observe two main threats to company value “caused” by AI. One is the obvious external threat; the other is an internal threat caused by a failure of executive function. The surprising truth I want to share with you is this: hurried adoption of AI can be a dangerous misstep for many organizations, and its cost shows up as the opportunity cost of wasted time and resources.

How bad actors are leveraging AI

At Resilience, one of our most pressing concerns is the increasingly sophisticated use of AI by threat actors. This isn’t a theoretical future threat—it’s happening now.

For example, we’re seeing more effective deepfakes, both video and audio, that can convincingly impersonate executives or trusted figures. But even more concerning is the evolution of phishing and email attacks. Traditional defenses like spotting misspellings or awkward phrasing are becoming obsolete as AI generates increasingly persuasive communications.

For our clients, this translates to heightened risk in two key areas: ransomware and business email compromise (BEC)/wire fraud. While ransomware often dominates headlines, BEC is potentially just as impactful in aggregate, representing a significant and growing threat vector enhanced by AI capabilities.

How careless actors are wasting AI opportunity

The current AI fervor reminds me of the big data analytics movement of a decade ago. Despite the promise and hype, industry reports revealed high failure rates for big data initiatives. The primary reason? A lack of close alignment with business value, objectives, and goals.

We’re seeing similar patterns with AI today, with RAND reporting comparable failure rates for AI initiatives. The critical mistake organizations made with big data was starting with the data—asking “What can I do with this?”—instead of clarifying and defining the problem—“What are we trying to achieve, and why?” The same mistake is emerging again.

This backward approach inevitably leads to solutions in search of problems, wasting resources and undermining confidence in otherwise promising technologies. Ultimately, companies end up with interesting science fair projects instead of business value.

The correct sequence for technology adoption is: Values (Goals/Objectives) → Choices → Data. Before considering any AI solution, clearly define the problem you’re trying to solve and its business value. Only then turn to the information and data that help you make meaningful distinctions about the best path forward.

Be comfortable with uncertainty in your business case. Perfect information is rarely available, but estimation methods can provide sufficient confidence to move forward. Focus on ranges and probabilities rather than precise values.
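
To make this concrete, here is a minimal sketch in Python of what a range-based business case might look like. Every number in it is hypothetical; the point is the shape of the analysis, not the values. Uncertain inputs are expressed as ranges, a simple simulation samples them, and the output is a spread of outcomes plus the probability that the initiative creates value.

```python
import random

# Hypothetical, illustrative inputs for an AI initiative's business case.
# Each uncertain quantity is a (low, high) range rather than a point estimate.
ANNUAL_BENEFIT = (200_000, 900_000)       # value created per year, USD
IMPLEMENTATION_COST = (300_000, 700_000)  # one-time cost, USD
ANNUAL_RUN_COST = (50_000, 150_000)       # ongoing cost per year, USD
ADOPTION_RATE = (0.3, 0.9)                # fraction of projected benefit realized
YEARS = 3
TRIALS = 10_000

def simulate_net_value() -> float:
    """Draw one scenario and return its simple (undiscounted) net value."""
    benefit = random.uniform(*ANNUAL_BENEFIT) * random.uniform(*ADOPTION_RATE)
    run_cost = random.uniform(*ANNUAL_RUN_COST)
    return YEARS * (benefit - run_cost) - random.uniform(*IMPLEMENTATION_COST)

outcomes = sorted(simulate_net_value() for _ in range(TRIALS))
p10, p50, p90 = (outcomes[int(TRIALS * q)] for q in (0.10, 0.50, 0.90))
prob_positive = sum(v > 0 for v in outcomes) / TRIALS

print(f"Net value, 10th to 90th percentile: {p10:,.0f} to {p90:,.0f} USD")
print(f"Median net value: {p50:,.0f} USD")
print(f"Probability the initiative creates value: {prob_positive:.0%}")
```

A result like this will not tell you exactly what an AI initiative is worth, but it gives decision-makers something defensible to act on, even without perfect information.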

For those looking to deepen their understanding of this approach, I recommend two resources:

Implications for insurance: Assessing cyber risk in a dynamic landscape

Cyber risk assessment differs fundamentally from traditional P&C insurance, in which catastrophic events like hurricanes, fires, and earthquakes follow relatively stable patterns over time. In cyber, the threat landscape shifts continuously as adversaries develop new techniques and new vulnerabilities emerge.

At Resilience, we’ve developed a dynamic approach to address this reality. We frequently update our “actuarials”—which are actually sophisticated machine learning models—using hierarchical Bayesian networks. These models are powerful, but they’re only as good as the information that feeds them.

This is where subject matter experts become crucial. Our models incorporate both hard data and expert judgment, creating a more comprehensive risk picture than either could provide alone. This hybrid approach enables us to implement what military strategists call the OODA Loop (Observe, Orient, Decide, Act) in our risk operations center to maintain an edge over adversaries.

But despite the industry buzz, we’re not driven by FOMO to integrate the latest evolving AI systems into our core risk assessment tools. Instead, we rely on cost-effective and powerful classical mathematical models—specifically hierarchical Bayesian networks—that excel with smaller datasets.

These models allow us to incorporate expert judgment and handle uncertainty effectively without requiring massive data infrastructure. That said, we do leverage AI internally for operational efficiency, such as using meeting summary features to improve customer engagement or monitoring AI-based scanning systems that alert us to emerging threats. We know the capabilities of our methods, and we have a clear understanding of how they support the executives who make security investment decisions.
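
As a rough illustration only, and not a description of our production models, the sketch below uses a simple Beta-Binomial update in Python (with SciPy and entirely made-up numbers) to show how an expert-elicited prior and a small amount of observed data can be blended into one estimate with quantified uncertainty.

```python
from scipy.stats import beta

# Hypothetical expert judgment: an analyst believes roughly 5% of comparable
# organizations suffer a material BEC loss in a year, with confidence roughly
# equivalent to having observed 40 organizations. Encode this as a Beta prior.
prior_alpha, prior_beta = 2, 38   # prior mean = 2 / (2 + 38) = 5%

# A small amount of observed "hard data": 4 losses among 30 monitored organizations.
losses, observed = 4, 30

# Conjugate Bayesian update: the posterior is also a Beta distribution.
post_alpha = prior_alpha + losses
post_beta = prior_beta + (observed - losses)
posterior = beta(post_alpha, post_beta)

lo, hi = posterior.ppf(0.05), posterior.ppf(0.95)
print(f"Posterior annual loss probability: {posterior.mean():.1%}")
print(f"90% credible interval: {lo:.1%} to {hi:.1%}")
```

With abundant data the observations dominate the estimate; with sparse data the expert prior keeps it sensible. That behavior is why Bayesian methods hold up well on the smaller datasets typical of cyber loss experience.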

A parting admonition

If there’s one piece of advice I can leave you with, it’s this: a well-defined problem is more than half solved. Take the time to understand your “why” and the potential value before jumping into AI solutions.

At Resilience, we often say, “We go fast by going slow.” This seeming paradox captures the essence of thoughtful technology adoption. By first taking the time to clarify our objectives, identify meaningful strategic choices for achieving them, and understand how key uncertainties put those objectives at risk, we avoid costly rework and detours, ultimately implementing more effective solutions more quickly. With the right level of discipline and practice, you can do this, too.
