Don't get caught up in AI FOMO
As the Senior Director of Cyber Resilience at Resilience, I bring a somewhat unconventional perspective to the table. Unlike many in our industry who come from traditional cybersecurity or insurance backgrounds, my expertise lies in decision science. Throughout my career, I’ve been fascinated by one central question: How can we help people make good decisions before they act, rather than simply praising, after the fact, whatever decisions happened to produce desired outcomes? It’s a simple question with profound implications.
This question has never been more relevant than with the advent of generative AI, as organizations rush to implement AI solutions across their operations. At the moment, we observe two main threats to company value “caused” by AI. One is the obvious external threat; the other is an internal threat caused by failures of executive decision-making. The surprising truth I want to share with you is this: hurried adoption of AI can be a dangerous misstep for many organizations, with the price paid in the opportunity costs of wasted time and resources.
How bad actors are leveraging AI
At Resilience, one of our most pressing concerns is the increasingly sophisticated use of AI by threat actors. This isn’t a theoretical future threat—it’s happening now.
For example, we’re seeing more effective deepfakes, both video and audio, that can convincingly impersonate executives or trusted figures. But even more concerning is the evolution of phishing and email attacks. Traditional defenses like spotting misspellings or awkward phrasing are becoming obsolete as AI generates increasingly persuasive communications.
For our clients, this translates to heightened risk in two key areas: ransomware and business email compromise (BEC)/wire fraud. While ransomware often dominates headlines, BEC is potentially just as impactful in aggregate, representing a significant and growing threat vector enhanced by AI capabilities.
How careless actors are wasting AI opportunity
The current AI fervor reminds me of the beginning of the big data analytics movement a decade ago. Despite the promise and hype, industry reports revealed high failure rates for big data initiatives. The primary reason? Lack of close alignment with business value, objectives, and goals.
We’re seeing similar patterns with AI today, with RAND reporting comparable failure rates for AI initiatives. The critical mistake organizations made with big data was starting with the data (asking “What can I do with this?”) instead of first clarifying and defining the problem (“What are we trying to achieve, and why?”). The same mistake is emerging again.
This backward approach inevitably leads to solutions in search of problems, wasting resources and undermining confidence in otherwise promising technologies. Ultimately, companies just end up building interesting science-fair projects.
The correct sequence for technology adoption should be: Values (Goals/Objectives) → Choices → Data. Before considering any AI solution, clearly define the problem you’re trying to solve and its business value. Only then turn to the information and data that can help you distinguish the best path forward.
Be comfortable with uncertainty in your business case. Perfect information is rarely available, but estimation methods can provide sufficient confidence to move forward. Focus on ranges and probabilities rather than precise values.
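To make this concrete, here is a minimal sketch of that style of estimation: a small Monte Carlo simulation that turns expert 90% confidence ranges into a distribution of net value for a hypothetical AI initiative. Every figure and the helper lognormal_from_90ci are invented for illustration; the point is the thinking process, not anyone’s actual business case.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulation trials

def lognormal_from_90ci(low, high, size):
    """Sample a lognormal whose 5th/95th percentiles match an expert's 90% range."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 1.645 = z-score at the 95th percentile
    return rng.lognormal(mean=mu, sigma=sigma, size=size)

# Hypothetical 90% confidence ranges elicited from subject matter experts.
annual_benefit = lognormal_from_90ci(200_000, 1_500_000, N)  # value if the initiative delivers
adoption_cost  = lognormal_from_90ci(300_000,   900_000, N)  # cost to build and run it
p_success      = 0.6  # hypothetical chance the initiative delivers at all

delivers = rng.random(N) < p_success
net_value = np.where(delivers, annual_benefit, 0.0) - adoption_cost

print(f"P(net value > 0): {(net_value > 0).mean():.0%}")
print(f"90% range of net value: {np.percentile(net_value, 5):,.0f} "
      f"to {np.percentile(net_value, 95):,.0f}")
```

The answer comes back as a probability of positive value and a range, which is exactly the form of answer a decision maker needs before committing resources.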
For those looking to deepen their understanding of this approach, I recommend two resources:
- “How to Measure Anything in Cybersecurity Risk” by Douglas W. Hubbard and Richard Seiersen
- My book, “Business Case Analysis with R” (even if you skip the programming, focus on the thinking process)
Implications for insurance: Assessing cyber risk in a dynamic landscape
Cyber risk assessment differs fundamentally from traditional P&C insurance, in which catastrophic events like hurricanes, fires, or earthquakes follow relatively stable patterns over time. In cyber, the threat landscape shifts continuously as adaptive adversaries develop new techniques and new vulnerabilities emerge.
At Resilience, we’ve developed a dynamic approach to address this reality. We frequently update our “actuarials,” which are actually sophisticated machine learning models built on hierarchical Bayesian networks. These models are powerful, but they’re only as good as the information that feeds them.
This is where subject matter experts become crucial. Our models incorporate both hard data and expert judgment, creating a more comprehensive risk picture than either could provide alone. This hybrid approach enables us to implement what military strategists call the OODA Loop (Observe, Orient, Decide, Act) in our risk operations center to maintain an edge over adversaries.
But despite the industry buzz, we’re not driven by FOMO to integrate the latest AI systems into our core risk assessment tools. Instead, we rely on cost-effective and powerful classical mathematical models, specifically hierarchical Bayesian networks, that excel with smaller datasets.
These models allow us to incorporate expert judgment and handle uncertainty effectively without requiring massive data infrastructure. That said, we do leverage AI internally for operational efficiency, such as using meeting-summary features to improve customer engagement or AI scanning systems that alert us to emerging threats. We know the capabilities of our methods, and we have a refined understanding of how they support the executives who make security investment decisions.
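For intuition about why hierarchical Bayesian models work well with smaller datasets and expert judgment, here is a toy sketch using the open-source PyMC library: incident rates for a handful of sectors are partially pooled toward an industry-wide prior that encodes expert belief. The sectors, counts, and priors are all hypothetical assumptions for illustration, not a description of Resilience’s actual models.

```python
import numpy as np
import pymc as pm

# Hypothetical, sparse data: incident counts per client-year in three sectors.
sector_idx = np.array([0, 0, 0, 1, 1, 2])  # sector of each observation
incidents  = np.array([2, 1, 3, 0, 1, 4])  # incidents observed that year

with pm.Model():
    # Hyperpriors: expert judgment about the industry-wide rate enters here,
    # e.g. "roughly one to three incidents per client-year."
    mu_log_rate    = pm.Normal("mu_log_rate", mu=np.log(2.0), sigma=0.5)
    sigma_log_rate = pm.HalfNormal("sigma_log_rate", sigma=0.5)

    # Sector-level rates are partially pooled toward the industry-wide prior,
    # which keeps estimates sensible even with only a few observations per sector.
    log_rate = pm.Normal("log_rate", mu=mu_log_rate, sigma=sigma_log_rate, shape=3)
    rate = pm.Deterministic("rate", pm.math.exp(log_rate))

    pm.Poisson("obs", mu=rate[sector_idx], observed=incidents)

    trace = pm.sample(1000, tune=1000, chains=2, random_seed=7)

# Posterior sector rates blend the sparse counts with the expert-informed prior.
print(trace.posterior["rate"].mean(dim=("chain", "draw")).values)
```

Note how the second sector, with only two quiet observations, gets pulled toward the industry-wide rate rather than being estimated at nearly zero; that partial pooling is what lets expert judgment carry the model where hard data is thin.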
A parting admonition
If there’s one piece of advice I can leave you with, it’s this: a well-defined problem is more than half solved. Take the time to understand your “why” and the potential value before jumping into AI solutions.
At Resilience, we often say, “We go fast by going slow.” This seeming paradox captures the essence of thoughtful technology adoption. By first taking the time to clarify our objectives, identify meaningful strategic choices for achieving them, and understand how key uncertainties put those objectives at risk, we avoid costly rework and detours, ultimately implementing more effective solutions more quickly. With the right level of discipline and practice, you can do this, too.