
What DeepSeek means for cyber risk

by Emma McGowan, Senior Writer
Published

China's new LLM has everyone scrambling

The January 20 release of DeepSeek, an open-source LLM developed by a Chinese research lab, rocked both the tech world and the financial markets. The model quickly demonstrated dramatically better energy and cost efficiency, with performance comparable to American-made AI products like OpenAI's ChatGPT. It also highlighted a number of cyber risks, ranging from the use of shadow AI to national security concerns for the United States.

“The sheer innovation behind DeepSeek’s ability to minimize GPU use and produce faster, more technical responses at a lower cost vastly disrupted the AI landscape, leaving us to wonder what other developments China has made,” says Maria Long, Deputy Chief Underwriting Officer at Resilience. “This poses concerns about the US keeping the upper hand and maintaining the ability to respond defensively to AI-driven attacks.”

DeepSeek’s emergence is not just a technological milestone—it’s a paradigm shift that forces both businesses and governments to rethink their approach to AI risk. While its efficiency and performance gains are undeniable, the model’s rapid adoption raises pressing concerns about data security, regulatory oversight, and geopolitical implications. To understand the full impact, we must examine how DeepSeek is reshaping the risk landscape at both the enterprise and national security levels.

How DeepSeek changes the risk landscape

The rise of DeepSeek is sending ripples across both business and national security landscapes, reigniting concerns about AI governance, data security, and geopolitical competition. As organizations and governments grapple with these concerns, the question remains: how do we harness AI’s potential without compromising security?

Potential business risks

While most businesses won’t face entirely new risks from DeepSeek, its rapid adoption underscores the ongoing challenge of managing AI use within organizations—particularly when it comes to shadow IT and data exposure.

“In many settings these tools are surfacing as shadow IT, where employees are using the technology for day-to-day tasks, unbeknownst to the company,” Long says. “This can inadvertently expose sensitive information by querying or prompting the LLM with confidential material, including intellectual property.”

The widespread popularity of chat-based LLMs like ChatGPT and now DeepSeek means most organizations can assume that at least some of their employees are using these tools. To reduce the related cyber risk, organizations need to adopt acceptable use policies while simultaneously providing employee training on the risks of AI tools.
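Acceptable use policies are easier to enforce when shadow AI use is visible. As a minimal sketch of what that visibility might look like, the snippet below flags web-proxy log entries that hit known chat-LLM domains; the domain list, log format, and function name are illustrative assumptions, not a vetted detection rule.

```python
# Minimal sketch: flag proxy-log entries that reach chat-LLM domains.
# AI_TOOL_DOMAINS is an illustrative list, not an authoritative blocklist.
AI_TOOL_DOMAINS = {"chat.deepseek.com", "chatgpt.com", "chat.openai.com"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs where a monitored AI domain appears.
    Assumes whitespace-separated lines of the form: timestamp user domain."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_TOOL_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

logs = [
    "2025-02-01T09:14 alice chat.deepseek.com",
    "2025-02-01T09:15 bob intranet.example.com",
]
print(flag_shadow_ai(logs))  # → [('alice', 'chat.deepseek.com')]
```

In practice this kind of check would live in a secure web gateway or CASB rather than a script, but the principle is the same: you cannot write a realistic acceptable use policy for tools you cannot see.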

Finally, the recent suspected DDoS attack on DeepSeek underscores the fragility of some of these models. “If businesses use AI to improve uptime and overall business performance, they must have redundancies in place as a fail-over to ensure business continuity should the AI fail or not perform as expected,” Long says.

Potential national security risks

While the business risks are similar to those of other AI products, DeepSeek does present some new and serious national security concerns. In fact, the US Navy has already banned members from using it “for any work-related tasks or personal use,” citing “potential security and ethical concerns.” And as the recent TikTok ban spotlights, the US government is actively concerned about the Chinese government having access to vast troves of data from US citizens.

“Many US perspectives dub DeepSeek as a national security threat,” Long says. “There are concerns around how data entered into the LLM for prompting and queries may be used, and the perceivably high potential of surveillance from the CCP.”

Mike McNerney, SVP for Security at Resilience and board member of VetsinTech, points out that “the US had no interest in a world of telecoms dominated by Huawei and should have similar concerns about DeepSeek becoming the go-to standard for open-source AI.”

“The US is undeniably locked in an AI race with China—one that’s poised to shape both our economies and future battlefields,” McNerney continues. “This breakthrough by DeepSeek is a likely example of incredible Chinese intelligence capabilities and shows just how seriously they want to win this race.”

Steps to reduce cyber risk exposure from DeepSeek

While DeepSeek and similar tools can boost productivity for organizations, they also provide advantages to threat actors. This dual-use nature of AI is likely to drive a rise in both the frequency and impact of cyber threats, as AI amplifies the effectiveness of existing attack methods.

Without proper oversight, organizations risk data leaks, regulatory violations, and AI-driven cyber threats. To stay ahead, companies must take proactive steps to govern AI use, educate employees, and build safeguards against emerging risks. Here’s how:

1. Engage leadership for AI governance and stakeholder education

AI governance starts at the top. Simon West, Director of Customer Engagement at Resilience, recommends securing leadership buy-in to establish clear policies that define acceptable AI use across the organization. Educate key stakeholders—including executives, IT teams, and legal departments—on the risks of shadow AI. Establishing governance now ensures alignment on risk tolerance and responsible AI adoption.

2. Provide employee training to prevent data leaks

AI tools like DeepSeek can inadvertently expose sensitive data if employees use them without proper safeguards, West says. Implement company-wide training programs to help employees understand:

  • The risks of inputting confidential or proprietary information into AI tools.
  • The potential for AI-generated responses to be misleading, biased, or manipulated by adversaries.
  • Best practices for secure AI use, including encryption and data access controls.
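The first of those training points can also be backed by a technical guardrail. Below is a minimal sketch of redacting obviously sensitive strings before a prompt leaves the company network; the patterns and the `redact_prompt` function are hypothetical illustrations, not a substitute for a real data-loss-prevention tool.

```python
import re

# Illustrative patterns only -- a production DLP layer would rely on
# vetted detectors and classification policies, not a few regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholder tags
    before the prompt is sent to an external LLM."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com, key sk-abcdefgh12345678"))
# → Contact [REDACTED_EMAIL], key [REDACTED_API_KEY]
```

Even a rough filter like this reinforces the training message: anything typed into a third-party LLM should be treated as data leaving the organization.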

3. Develop a governance model with accountability measures

A strong governance framework ensures AI is used ethically, securely, and in compliance with regulations. Define clear roles and responsibilities for AI oversight, including:

  • Risk assessment teams to evaluate AI security vulnerabilities.
  • Compliance officers to monitor adherence to legal and industry standards.
  • Incident response protocols for handling AI-related breaches or failures.

4. Implement redundancies for business continuity

AI failures—whether due to malfunctions, adversarial attacks, or unexpected performance issues—can disrupt critical operations. To minimize business impact:

  • Establish fail-over systems that ensure continuity if AI tools become unavailable.
  • Maintain alternative workflows that allow employees to operate without full reliance on AI.
  • Regularly test backup systems to validate their effectiveness in crisis scenarios.
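The fail-over idea above can be sketched in a few lines. In this hypothetical example, `primary_ai_summarize` stands in for a call to an external AI service and `manual_queue` for an existing non-AI workflow; neither is a real API.

```python
import logging

def primary_ai_summarize(text: str) -> str:
    """Stand-in for an external LLM call; real calls can fail on
    timeouts, outages, or rate limits."""
    raise ConnectionError("AI service unavailable")  # simulate an outage

def manual_queue(text: str) -> str:
    """Non-AI fallback: route the work to a human-driven workflow."""
    return f"queued for manual review ({len(text)} chars)"

def summarize_with_failover(text: str) -> str:
    """Try the AI path first, but keep the business running if it fails."""
    try:
        return primary_ai_summarize(text)
    except Exception:
        logging.warning("AI path failed; falling back to manual workflow")
        return manual_queue(text)

print(summarize_with_failover("quarterly incident report"))
# → queued for manual review (25 chars)
```

The design point is that the fallback path exists and is exercised before the outage, not invented during it.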

5. Monitor and adapt to evolving AI-driven threats

AI is accelerating both cyber defense and cyber offense. Businesses should stay ahead of threats by:

  • Tracking attack patterns to assess how AI is being used to enhance cyberattacks.
  • Strengthening endpoint security to detect AI-generated threats like deepfake phishing.
  • Engaging with industry groups and government agencies to stay informed on AI risk mitigation best practices.

6. Procure insurance coverage

Businesses should consider both Technology Errors & Omissions (Tech E&O) insurance and Cyber Insurance policies to cover AI-related incidents.

Here’s a breakdown of the coverage each type of policy provides:

Technology Errors & Omissions (Tech E&O) Insurance:

  • This type of insurance covers financial losses that occur due to errors, omissions, or failures in technology services or products.
  • It provides protection against claims of negligence, software failures, or unintentional security gaps in AI deployments.

Cyber Insurance:

  • This insurance covers financial and operational losses resulting from cyber incidents. These incidents may include data breaches, ransomware attacks, and regulatory fines.
  • Cyber insurance may also include coverage for business interruption, legal expenses, and incident response services.

To ensure adequate coverage, businesses should take the following steps:

  • Engage with your broker or insurance provider: Review current Tech E&O and Cyber Insurance policies to understand what is covered. Inquire whether the policy includes AI-related risks, third-party liability, and regulatory fines.
  • Evaluate policy exclusions and limitations: Ensure that coverage extends to breaches or failures related to AI-driven applications and identify any exclusions that could leave gaps in protection.
  • Enhance coverage if necessary: If the existing policy does not adequately cover AI-related risks, discuss potential endorsements or policy adjustments with your broker. Consider additional cybersecurity insurance options that align with the business’s AI usage.

As DeepSeek reshapes the AI landscape, its impact extends beyond technological advancements to fundamental questions about security, governance, and resilience. Businesses must now navigate a rapidly evolving risk environment—one where AI-driven efficiencies come hand in hand with heightened cyber threats and regulatory scrutiny. 

Whether addressing the challenges of shadow IT, ensuring business continuity, or safeguarding against national security concerns, organizations must take proactive steps to mitigate risk while leveraging AI’s benefits. The key question is no longer if AI like DeepSeek will be integrated into business operations, but how to do so securely and responsibly.
