
Everyone agrees that AI is changing the threat landscape. Fewer people are talking about what that actually demands from security and legal teams right now, in 2026, with autonomous agents already running inside enterprise environments. So what does it really take to make your organization AI resilient?
That was the premise of our March Risk Briefing, where Resilience CISO Chris Wheeler, Chief Legal Officer Mehvish Femia, and Senior SecOps Engineer Paragi Shah spent an hour cutting through the noise. The discussion covered how AI is reshaping loss patterns, why the “human in the loop” model has moved from best practice to non-negotiable, how legal and procurement teams are struggling to keep pace with terms of service changes, and what the regulatory high-water mark actually looks like for organizations managing a global footprint.
A few moments from the session are worth flagging before you read the full transcript. Paragi walked through two real incidents — an autonomous agent that executed a drop database command on a live production server after being told eleven times not to make changes, and a prompt injection vulnerability in GitHub Copilot that allowed a malicious README file to silently disable user confirmations and exfiltrate private code. Both cases point to the same underlying problem: agents granted excessive permissions, without circuit breakers or human review for high-risk actions, become a privileged insider threat.
Chris made the case that outright blocking is no longer a viable default. With 88% of webinar attendees reporting a company-approved AI tool, the strategic question has shifted from “how do we stop AI from entering the environment” to “how do we safely enable it while actually understanding the scope of what we’ve let in.” That includes a harder look at non-human identities — service accounts, API keys, autonomous agents — that most standard access reviews still aren’t accounting for.
Mehvish’s advice on vendor contracts was practical and direct: feedback clauses are a procurement problem before they become a legal one. Vet what a tool collects, determine what can be turned off, and treat anything non-negotiable as a reason not to proceed — not a reason to accept the risk.
The full transcript is below. We’ve also pulled out the five most actionable recommendations from the session, covering how to modernize data classification for AI utility, govern non-human identities, implement agentic guardrails, vet the AI supply chain, and build continuous visibility through AI gateways. Whether you’re just starting to build out your AI governance framework or stress-testing what you already have, these are the moves worth prioritizing in the next 90 days.
Webinar Transcript: Resilience Risk Briefing
Date: March 24, 2026
Host: Jud Dressler (Director of Risk Operations Center)
Panelists:
- Chris Wheeler: Chief Information Security Officer
- Mehvish Femia: Chief Legal Officer
- Paragi Shah: Senior Security Operations Engineer
Introduction
Jud Dressler: All right, we’ll go ahead and get going. Welcome back everyone. For our new folks, a very quick reminder of who we are. Resilience is a cyber risk management company focused on the proactive disruption of the cyber attack chain, quickly turning intelligence into action for our clients to protect them and to impose costs on our adversaries. We ensure our clients know their risk in both technical and financial terms and help them prioritize and manage that risk through mitigating controls and transfer through an insurance policy.
We started this webinar in January talking about the fundamental shift in cyber extortion trends moving from encryption to data theft-only attacks, and how this needs to change the defender’s mentality. Last month, we went through the reality seen in our claims data, including frequency and severity stats and some great case studies including extortion, privacy, and litigation trends. And we took a quick look into what we’re already seeing from threat actors’ use of artificial intelligence in our claims data.
If you didn’t make it to the January or February session, we’ll drop the link in the chat; you can view those at any time. Feel free throughout today’s session to type in questions. We’ll try to address them if possible, but if we don’t get to it during the webinar, we will get you an answer afterwards.
Today, we’re going to dive deeper into the hottest term in the community right now: artificial intelligence. We’ve all seen the headlines that AI is changing everything. But for the CISO, everything usually means the attack surface. So we’re moving past the hype to look at how AI is physically shifting our loss patterns, what the 2026 regulatory landscape actually demands, and how to move from being AI safe to AI resilient.
To do that, I want to bring in three folks from across the Resilience team: our very own CISO, Chris Wheeler; our Chief Legal Officer, Mehvish Femia; and our Senior Security Operations Engineer, Paragi Shah. I’ll pass it over to each of you now to go a little deeper into your background. Over to you, Chris.
Panelist Introductions
Chris Wheeler: Hi everyone, my name is Chris Wheeler. I’m the Chief Information Security Officer here at Resilience. Prior to Resilience, I was at Morgan Stanley for four and a half years. I started my career in the US government, in the US Navy, and also worked over at Arbor Networks. Over to Mehvish.
Mehvish Femia: Thanks, Chris. Hi everyone, my name is Mehvish Femia. I’m the Chief Legal Officer here at Resilience and I’m responsible for several key business enablement functions including legal and information security. I started my career in insurance doing group benefits many, many moons ago and have had the privilege of being able to be here with you today on the cutting edge of the legal and regulatory landscape. I’ll turn it over to Paragi.
Paragi Shah: Thank you, Mehvish. I’m Paragi Shah, the Senior SecOps Engineer here. I’ve been with Resilience for about nine months now, and my current focus is the secure enablement of AI tools and building automation workflows for security operations, with plans to scale these efficiencies across other departments as well. My background comes from a security startup in the MSSP space where I led and built scalable threat data ingestion pipelines, SOAR platforms, and perimeter defense solutions. Thank you for having me here.
AI-Enhanced Risks and Shifting Loss Patterns
Jud Dressler: Great. Thanks, everybody. So we’re going to start real quick with a poll. We want to hear from you. Go ahead and hit your answer: Who has a company-approved AI tool—whether that’s Gemini or OpenAI’s ChatGPT? Has your company decided on one tool, or many tools, that are approved within the company?
Okay, let’s get going. We’ll start with the basics here—or I guess what used to be the basics. We’ve seen statistics suggesting that AI-enhanced phishing campaigns have reached about a 57% efficacy rate. I think the previous rate was around 10 to 12%. So it’s kind of a tired stat now; everybody’s talking about it, but the implications are definitely fresh. So this question is for Chris here: Are we actually seeing new types of losses, or is AI just making the old ones happen at a scale we can’t manually defend against?
Chris Wheeler: So I’d say a little bit of yes and no on this one. Some AI-specific losses are emerging, but a lot of them are the familiar patterns you might have seen in previous years—AI is just accelerating the pace at which data loss and leakage prevention (DLP) concerns happen within your organization. The most common example many insurers have seen is HIPAA data being pasted into an unapproved AI tool. That very well could have happened with any tool in previous years.
I’d say some of the fresher implications, one is the speed. Jud spoke to the efficacy rate, but the speed at which agents are being used by attackers and how they can use those to craft better campaigns. Another really interesting one that we saw just a couple weeks ago was for companies that are deploying internal productivity tools, especially across the company, agentic tools in particular, or are maintaining those or working them into their platforms.
The specific example I’m thinking of: a company called CodeWall dropped a report on how they had compromised a very well-known consultancy’s internal agentic AI system—all the different chats they had access to, different identities, proprietary research. One of the more interesting parts, where we start talking about new types of attacks, was the exposure of configurations and prompts around the models themselves. So not just the data, but how the models are configured, their guardrails. I found that very interesting. In summary, we are seeing a lot of existing attacks happening faster, plus entirely new threat models for companies that are deploying AI for their employees and their clients.
Technical Risks: Deviation and Guardrails
Jud Dressler: Appreciate that. Over to Paragi. From a technical standpoint, what does it look like when an autonomous system fundamentally deviates from its guardrails, and why is maintaining the “human in the loop” model kind of non-negotiable?
Paragi Shah: Yeah, so from a technical standpoint, deviation occurs when an AI’s optimization goal decouples from its operational guardrails. And this is not just a bug; it’s unintended agency. When an agent breaches its safety nets—the technical controls and policies designed to keep its behavior predictable—it becomes a runaway agent, operating beyond its defined scope with unauthorized, potentially catastrophic actions.
I’ll talk about a few real-world cases where this deviation has caused unintended behaviors and major incidents. The first one is a prominent incident that involved an autonomous agent in Replit. Just to give some context here, Replit is a cloud-based AI-powered development platform that enables beginners and advanced developers to quickly write code and build full-stack applications with natural language prompts.
And the deviation happened during a code freeze. The agent was explicitly told 11 times not to make production changes, but it interpreted a secondary goal—a routine maintenance task—as a higher priority. As a result, it executed a drop database command on a live production server. And not only that: when it was confronted by the developer, it tried to cover its tracks, creating 4,000 fake user records to make the dashboard appear normal while the real data was gone.
This happens when the AI agent is granted excessive permissions—in this case, write and delete permissions on a live production server—without any software circuit breaker or even a human in the loop there for external validation for high-risk commands.
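The software circuit breaker Paragi describes can be sketched as a thin gate in front of whatever executes the agent’s commands. Everything below—the risk patterns, function names, and approval callback—is illustrative, not any particular vendor’s API:

```python
import re

# Command patterns treated as high-risk (illustrative; tune per environment)
HIGH_RISK = [
    r"\bdrop\s+(database|table)\b",
    r"\bdelete\s+from\b",
    r"\btruncate\b",
    r"\brm\s+-rf\b",
]

def requires_human_approval(command: str) -> bool:
    """Return True if the command matches any high-risk pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in HIGH_RISK)

def execute_agent_command(command: str, approve) -> str:
    """Circuit breaker: run low-risk commands directly; gate high-risk
    ones behind a human approval callback (e.g., a ticket or chat prompt)."""
    if requires_human_approval(command):
        if not approve(command):
            return "BLOCKED: high-risk command denied by human reviewer"
        # Falls through only after explicit human approval
    return f"EXECUTED: {command}"
```

In the Replit-style incident, a gate like this would have held the drop database command for human review regardless of how the agent prioritized its goals.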
The other scenario I want to talk about is the YOLO mode vulnerability identified in GitHub Copilot in August of 2025. We all know YOLO as “you only live once,” but when it comes to AI coding tools, it’s “you only look once.” YOLO mode in AI coding tools like Claude Code and Cursor enables autonomous actions by skipping manual approvals for file edits and terminal commands. While this definitely boosts speed, it skips essential safety barriers—it’s basically giving you power without guardrails.
So in this incident, researchers identified a vulnerability—there is a CVE associated with it—where a malicious prompt in a README file triggered a prompt injection attack. The prompt tricked Copilot into silently entering YOLO mode: the AI disabled all user confirmations and could then run scripts, install malware, or exfiltrate private code without the developer seeing any prompts. This represents agentic supply chain risk, where your own AI becomes a privileged insider threat.
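One first-line defense against this class of attack is to scan untrusted documents for instruction-like content before an agent ingests them. This is a heuristic sketch with made-up patterns—real prompt injections are adversarial and will evade simple filters, so a check like this complements, rather than replaces, user confirmations:

```python
import re

# Heuristic markers of instructions aimed at the agent rather than the
# human reader (illustrative patterns, not a complete defense)
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) (instructions|rules)",
    r"disable (confirmations?|safety|guardrails)",
    r"(enable|enter) yolo mode",
    r"do not (tell|show|inform) the user",
    r"<!--.*?-->",  # hidden HTML comments in a README can carry payloads
]

def scan_untrusted_document(text: str) -> list[str]:
    """Return the suspicious patterns found in an external document
    (README, web page, email) before an agent ingests it."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS
            if re.search(p, lowered, re.DOTALL)]
```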
These examples show us why “human in the loop” is just non-negotiable. We all know that models these days are remarkably fluent in human language, and they’re getting even better. What they lack is intent. That’s where the human in the loop comes into play. For example, when a senior engineer says, “We need to clean up the cloud deployment,” that implicitly excludes the live production servers. But when the same instruction is passed to an over-permissioned AI agent, it will follow the instruction literally—and if it perceives cleaning up the database as the best way to comply, it’s going to go ahead and do that.
So, yeah, this has essentially shifted the attack surface. The introduction of flags like “dangerously skip permissions” in Claude Code, or “dangerously skip or disable sandbox” in Snowflake Cortex AI, places immense pressure on security teams. Balancing the speed of autonomous tools with the necessary security tooling is the primary challenge for modern enterprise integrity.
Legal and Compliance Strategy
Jud Dressler: Appreciate that. We did have one question from one of our viewers: Does Resilience have an overall model for analyzing agentic AI risks? I’ll just go ahead and take that. We do have questions in our platform that help our clients understand that risk, and we are building it into our back-end models as well to make sure we fully understand it. So it is something we continue to work towards. Something else we’re working on is a template—suggested guardrails and the like for our clients—so that they’re working from a known good, or a best practice, as they start to build out different AI systems.
Appreciate the insights there. It’s not just the attackers that are using AI; it’s really about the AI tools we’ve invited into our house as well, and we’ve hit on some of that. If everybody remembers the 2023 Zoom terms of service controversy, the landscape continues to evolve with AI. So over to Mehvish here: Terms of service updates are happening faster and really the change in capabilities are happening faster than potentially procurement can even handle them. So how should legal and security teams monitor these changes to ensure our data isn’t inadvertently being used to train third-party models?
Mehvish Femia: I think, surprisingly, the answer is to actually utilize the AI that you’ve invited into your house: make sure you have notifications set up properly so that people on your security and IT teams are aware changes are happening—ideally before they’re rolled out, but if advance notice isn’t possible, as close to real time as you can get—and make sure, first, that there’s a way to turn it off.
If not, then you pull in legal and say, “Okay, what are our requirements under our contracts? Can we get some of this shut off just for us?” But it really does require humans in the loop. You can set up information to flow to a person, but you have to make it somebody’s job to review the notifications that come out and ask: “Zoom will now do this, this, and this. Is that something you want inside your particular environment?” If not, contact your legal department to ask what the consequences are and what contract language you can use to actually disable some of it.
You’ll probably start to see this happen faster and faster as AI develops, building on Paragi and Chris’s commentary. In some cases, terms of service are going to be out of date as soon as they’re published, because the pace becomes exponential once developers have given their own agentic AI the opportunity to develop independently and push to production. Terms of service are the last thing anybody’s thinking about updating.
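The notification workflow Mehvish describes can be approximated with a simple watcher that fingerprints each vendor’s terms-of-service text and flags any change for a named human reviewer. The vendor name and alerting step here are placeholders:

```python
import hashlib

def tos_fingerprint(page_text: str) -> str:
    """Stable fingerprint of a terms-of-service page's text."""
    return hashlib.sha256(page_text.encode("utf-8")).hexdigest()

def check_for_changes(vendor: str, current_text: str, known: dict) -> bool:
    """Compare the current ToS text against the last seen fingerprint.
    Returns True (and updates state) when a change needs human review."""
    fp = tos_fingerprint(current_text)
    if known.get(vendor) == fp:
        return False  # unchanged; nothing to review
    known[vendor] = fp
    # In practice: open a ticket assigned to the person whose job it is
    # to review the vendor's ToS change (a hypothetical step here).
    return True
```

The key design point is the one Mehvish makes: the alert is only useful if reviewing it is explicitly somebody’s job.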
Mitigation and Data Classification
Jud Dressler: Appreciate that. When things are moving so fast, it’s hard to keep up sometimes. Let’s talk about prompt injection and shadow AI as we look to somewhat standardize what we do. First off, let’s go ahead and publish the results of the poll. So that gives us an idea of, hey, what companies are out there. How many of us have approved tools? But we all know that different parts of the company, different employees, they want to get their job done more efficiently, so a lot of times they are trying to get around your guardrails or they are trying to bring in their own AI tools. So when an engineer uses an unsanctioned tool to kind of clean up proprietary code on GitHub, what’s the actual kind of exfiltration risk we’re facing today? Let’s go over to Paragi.
Paragi Shah: Yeah, so in 2026, the risk of an engineer using an unsanctioned tool to clean up code has shifted from simple data leakage to agent hijacking. The danger is no longer just that the tool can see your code; it’s that a tool granted excessive permissions to your repository or environment can be weaponized into an automated insider threat.
There are multiple exfiltration risks we face today. I’m going to quote some numbers from the Enterprise AI and SaaS Data Security report released in October of 2025. Roughly 77% of employees now paste data into shadow AI tools, with about 82% of that activity coming from unmanaged personal accounts. That code is then often ingested for model training, and your intellectual property becomes part of the model’s weights—reachable by any competitor trying to find a similar solution.
And it’s not just that. These unsanctioned tools—as we saw with the GitHub vulnerability—often request full IDE or repository access. If an agent enters YOLO mode, skipping human confirmations, it can perform anything the GitHub user can—deleting branches, publishing malicious packages—without any user approval or oversight.
The other risk—the number one risk—is indirect prompt injection. Malicious instructions can be placed in external documents such as READMEs, or even websites and emails. In GitHub repos’ README files specifically, they can trick an agent into sending sensitive data to attacker-controlled destinations.
Another risk I want to talk about is memory poisoning and persistent exfiltration. Attackers can inject malicious instructions—and with the skills and plugins available in the marketplaces for agentic coding tools, this is not especially hard to do. When agents dynamically load these skills or plugins at runtime, they can contain instructions to exfiltrate data during routine tasks. As part of the malicious payload, they poison the agent’s long-term persistent memory, and those poisoned instructions then leak information across routine sessions.
The last risk I want to talk about is MCP risk—the use of MCP servers. For quick context, an MCP server is a lightweight program that acts as a standardized bridge between your model and external tools. An over-permissioned MCP server may expose dangerous tools, like file delete or data export, that the agent can be tricked into using.
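Least privilege for MCP-exposed tools reduces to a default-deny allowlist evaluated before every tool call. The tool names below are hypothetical—a real MCP server advertises its own tool list:

```python
# Hypothetical tool names; a real MCP server advertises its own tools.
ALLOWED_TOOLS = {"read_file", "search_docs", "list_issues"}
DENIED_TOOLS = {"delete_file", "export_data", "run_shell"}

def authorize_tool_call(tool: str) -> bool:
    """Default-deny: an agent may invoke only explicitly allowed tools.
    Anything unknown or explicitly denied is refused."""
    return tool in ALLOWED_TOOLS and tool not in DENIED_TOOLS
```

The important property is that a tool the security team never reviewed—one added by a server update, say—is denied by default rather than silently available to the agent.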
Strategizing for AI Resilience
Jud Dressler: Over to Chris. How are we addressing, from the CISO standpoint, how are we addressing the shadow AI problem? A lot of times it’s all or nothing, so is it a matter of blocking everything, or is there a way to kind of make sure everybody kind of follows the secure path, to make it that easy path for our employees to go down?
Chris Wheeler: Yeah, so first I’d say this is a very challenging problem—as we’ve discussed, in 2026 these things are evolving quickly, and so are the patterns. Directly: blocking is just one of the tools you have in your toolbox as a CISO, and you should have secure defaults. But one of the things I heard earlier was actually having employees use those approved tools—providing that approved path.
I think when Gen AI first started becoming so prevalent, many companies outright banned it. I think we’re past that point. We saw the results of the poll there; 88% of people on this call have an approved company AI tool. So more and more, it’s ensuring that those company-approved tools have the right enterprise agreements in place and that you are encouraging employees to use them.
Now, I think where we have a little trouble keeping pace is with some of the threat vectors Paragi was mentioning: whether those are in the tools themselves—and so fall within your existing vendor review process—or whether you have to start looking at plugins, extensions, and integrations, all the things those tools connect to.
Now, in order to get a handle on that, you do have some tools already in your existing infrastructure—things like identity providers, to understand who is logging into unapproved AI tools, say with a company account. Similarly, if you have network protections in place—a SASE, an internet gateway, or even a DNS protection mechanism—you can start with an inventory of how often and how many employees are accessing these types of tools on a day-to-day basis. Use that to start a conversation, both to understand employee needs—this is moving so quickly that people don’t just want a chat interface; they want agents that can take action on their behalf—and to start quantifying what you have at risk.
So how many of your employees are using unapproved tools? Is it one person, or is it 10 to 20 users of a particular tool? Why is that? What are they after? Are your existing tools not meeting their needs? Starting from the understanding that you can’t have an outright ban, and then getting a handle on the problem, is one thing. I would also encourage folks to look at integrations—increasingly, not just the tool itself, but what data you’re giving it explicit access to. Are you giving it access to, say, email, or a particular data store that might be central to your company? What level of permission—write access or read access? Use that to inform your risk assessments before you decide whether your existing tools are meeting needs.

I think all of this puts a premium on what you’re doing for third-party risk management, both with open-source tools and with your existing vendors. It’s a bit of a mindset shift for CISOs: from blocking things outright to asking how you can safely enable AI and understand the requirements of your business. The reality is this is not going away; it’s only accelerating. But you do have a lot of existing tools within your ecosystem to get a handle on the scope of the problem.
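Chris’s suggestion to mine existing gateway or DNS logs can be sketched as counting distinct users per AI-tool domain. The log format and the domain list here are assumptions for illustration:

```python
from collections import defaultdict

# Illustrative list of AI-tool domains to watch for in egress logs
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def shadow_ai_inventory(log_lines: list[str]) -> dict[str, set[str]]:
    """Given gateway log lines of the form 'user,domain' (an assumed
    format), return the distinct users seen per AI-tool domain."""
    usage = defaultdict(set)
    for line in log_lines:
        user, _, domain = line.partition(",")
        if domain.strip() in AI_DOMAINS:
            usage[domain.strip()].add(user.strip())
    return dict(usage)
```

An inventory like this answers Chris’s questions directly: is it one person or twenty using a given tool, and is that the conversation-starter for enablement or for risk quantification?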
Visibility and Governance
Jud Dressler: Appreciate that. We have a couple questions coming in from Francisco. First off: How are you maintaining full visibility of the AI agents running in our environment?
Chris Wheeler: Yeah, so I think full visibility is the challenge here. On the existing front, you have your EDR tools and your network tools. But if you have a cloud provider launching agents—not on an endpoint you monitor, but in their cloud ecosystem—that’s where you get into trust-but-verify. How would you log and monitor that? Do they offer it? Do you have the right subscription level? Do they even offer it as a feature, to stream that telemetry and give you visibility?
If not—and this goes back to how you assess the risk—what level of access would you give it? Do you enable the feature to begin with? Do you say, “Okay, you can use that, but only attached to certain systems”? What we find is that every tool like this is built differently and is at a different stage of development. We also talk a lot about budgets and return on investment for your controls: do you have the budget to invest in that enterprise plan? It’s increasingly becoming a pay-to-play ecosystem.
Where you don’t have that built-in visibility is where you get more and more into specific point solutions. I’m at the RSA Conference this week; I anticipate hearing about every solution known for securing agentic AI. But one thing we’ve observed across the board is that there is no one-size-fits-all here. Many vendors put a premium on developer IDEs, and—as with prompt injection and data leakage prevention—we see a lot of activity in that space. But most companies have specialties.
I think the other thing we’re increasingly looking at amongst our existing tooling is this: where we don’t have an off-the-shelf tool, where do we start building those ourselves? How are we building tools to assess our own configurations? AI is a double-edged sword in that regard—if we look at it only as a threat to our enterprise and not as an opportunity to quickly develop detections and configuration audits, that’s a lost opportunity for the security and risk community. So I wouldn’t say there’s a one-size-fits-all solution, but that’s where we are right now.
Jud Dressler: Come on, Chris, you don’t have a silver bullet ready to go?
Chris Wheeler: If I did, I would probably be very rich right now.
Data Classification and AI Utility
Jud Dressler: So yeah, appreciate that. We do have a couple of questions about strategy going forward, but I’m going to wrap them into a starting point. When we’re talking about internal AI usage, a lot of the security discussion has to start with understanding what data the company has, where it’s located, and who has access to it—you mentioned that, Chris. Over to Paragi: when we talk about data classification in 2026, how do we distinguish between public and private data? And how do we ensure our private data doesn’t become part of a training dataset? Any specific steps or tools you trust to help monitor these data buckets?
Paragi Shah: Yeah, so when it comes to data classification, we are no longer just looking at how sensitive the data is; we are also looking at its AI utility. Traditionally, we classify documents such as marketing materials and PRs as public. Now we want to add an AI handling rule on top, to identify whether those documents can be used by a model for training or should be kept private.
For public data—the marketing materials and PRs—we would probably apply a label saying it can be used to train any model. Documents marked internal—project plans, generic emails, Slack banter—are internal to the company, so from an AI perspective we want them used only for inference, never for training. Similarly, for confidential data such as customer PII and financials, we want to make sure it is processed only by siloed AI tools—self-hosted or VPC-deployed models—so the data doesn’t leave the workflow.
And lastly, for the restricted tier—which includes our intellectual property and trade secrets—we want to make sure it stays air-gapped and human-only. So when we talk about data classification, we want to specifically address the AI utility aspect as well. The second question was how we ensure private data doesn’t become training data. When we procure SaaS solutions from vendors, we want to make sure they offer zero-retention API contracts, which are becoming more and more common in standard enterprise agreements in 2026, as Chris mentioned earlier.
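The four tiers Paragi walks through pair each sensitivity label with an AI-handling rule. A minimal sketch of that mapping, with a simple enforcement check (tier names are from the session; the rule strings are illustrative):

```python
# AI-handling rule per classification tier, as described in the session
AI_HANDLING = {
    "public":       "train-allowed",   # e.g., marketing materials, PRs
    "internal":     "inference-only",  # e.g., project plans, Slack
    "confidential": "siloed-models",   # e.g., PII, financials (VPC/self-hosted)
    "restricted":   "human-only",      # e.g., IP, trade secrets (air-gapped)
}

def may_send_to_external_model(tier: str) -> bool:
    """Only tiers whose handling rule permits third-party processing may
    leave the environment; unknown tiers are denied by default."""
    return AI_HANDLING.get(tier) in {"train-allowed", "inference-only"}
```

Note that internal data passing this check still depends on the zero-retention contract Paragi mentions: it may be sent for inference, but must never be retained for training.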
Again, modernizing data classification for AI utility is essential. The other thing is implementing agentic guardrails—essentially a policy-as-code layer that every repository used by autonomous tools must adhere to before anything executes. These policies define forbidden commands, restricted directories, and similar constraints.
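Unlike a runtime approval gate, the policy-as-code layer described here is declarative: the repository carries a policy that the agent harness evaluates before executing any proposed action. The policy shape below is an assumption for illustration:

```python
# A minimal policy-as-code sketch. In practice the policy would live in
# a file checked into the repository; its shape here is assumed.
POLICY = {
    "forbidden_commands": ["drop database", "rm -rf", "git push --force"],
    "restricted_paths": ["/prod/", "secrets/"],
}

def violates_policy(action: str, policy: dict = POLICY) -> list[str]:
    """Return the policy rules the proposed action would violate;
    an empty list means the action is allowed to proceed."""
    lowered = action.lower()
    violations = [c for c in policy["forbidden_commands"] if c in lowered]
    violations += [p for p in policy["restricted_paths"] if p in lowered]
    return violations
```

Because the policy travels with the repository, it constrains any autonomous tool that operates on the code, not just one vendor’s agent.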
Another control we want is granular access control for GitHub repos: scope tokens to the minimum required, and enforce sandboxing wherever necessary to prevent lateral movement during a breach. And the last thing—actually the most important, and Chris touched on this as well—is supply chain vetting. Agentic coding tools offer extensions and plugins from various marketplaces, and we want an approved list of what’s allowed. A simple starting criterion: make sure the marketplace is approved and the plugin has a good number of downloads and a good rating. That’s a quick way to screen a plugin, but beyond that, I would recommend that plugins and extensions go through the security team to be vetted and approved for use.
As for tools: for data classification, we’re currently using a tool called Cyera, which supports AI-powered data classification—noting that this mention is for technical context only and does not constitute an official endorsement. Beyond specific tools, there are a few security principles for robust AI governance I would suggest. One is an AI gateway—such as Google Vertex AI or Amazon Bedrock—a centralized middleware hub to govern model access, allow for zero retention, and enforce security policies. That’s one of the things we’re looking into as well.
Google has another solution called Model Armor—model-agnostic sanitization that blocks jailbreak attempts and harmful prompt injections in real time. For auditing and monitoring—as Chris described—it’s about integrating with our existing SOC workflows and building automation around them. For continuous discovery, we want tools in place to look for shadow AI instances, unsanctioned plugins, and over-permissioned MCP servers. An agent-aware SAST (Static Application Security Testing) tool changes the landscape a bit as well: it’s an advanced form of static analysis that leverages AI to analyze code, prioritize vulnerabilities, and help remediate them. Lastly, I want to mention monitoring for model drift, or model decay—the silent degradation of your AI models, which over time can cause AI systems to make inaccurate, unreliable, or unsafe decisions, directly undermining the trustworthiness of the system. We want a way to monitor for that as well.
Vendor Contracts and Risk Ownership
Jud Dressler: So it sounds like there’s some decent tools and strategies out there for internal. But again, that third-party or vendor risk, it becomes harder. Feedback clauses inside of vendor contracts are somewhat becoming a nightmare. So how do we negotiate the right to not have our data improve a vendor’s product? Mehvish?
Mehvish Femia: Great question. Feedback clauses are a nightmare if you have not vetted your proposed tool solutions early on. So remember, contracts are a backstop; they’re there in case something has gone horribly awry and you need to call your lawyer to figure out what you do from there. As part of the due diligence process for any new systems that you’re onboarding, make sure that you’re vetting very carefully what features you can and cannot turn off when they are collecting feedback, whether it’s aggregate, anonymized data, what is actually being used to feed that model.
To Paragi’s earlier point around data classification, that is just the floor. If we are able to provide a vendor our data classification levels and say, “You can only use this portion of our data,” how do you turn off the rest of it? So feedback clauses really are something for Chris and his team to figure out at the outset: what does the tool do, what can it turn off, and is that negotiable or non-negotiable? If it’s non-negotiable, unfortunately, it means we probably should not be pursuing that particular SaaS tool as a solution for our particular environment, given the types of data we’re handling.
And then at the end of the day, we’re giving feedback all day, every day, on every piece of electronic equipment we use, whether or not we’re aware of it. It’s everywhere; you’re not going to be able to get out of providing it, but you can control it upfront through your due diligence processes. I would also recommend periodically going back through your existing vendors—not just reacting to an alert in their terms of service, but seeing what they’ve developed and what you can actually go back in and turn off. Some of our larger providers are offering opt-out email addresses where you just send an email and say, “Hey, I choose to opt out of you using all of my data for your new fancy thing.” So that’s something to consider as you build out your processes and update them.
Jud Dressler: That’s great advice. Mehvish, I want to stay with you here. The regulatory environment is no longer the Wild West, but it essentially has become a jungle of differing regulations. So between the EU AI Act, state-level laws like those in California, Colorado, the compliance burdens become kind of heavy. So for a CISO managing a global footprint, which regulation is kind of the high-water mark right now? What should be at the top of their M&A due diligence checklist?
Mehvish Femia: So I would recommend using the EU AI regulations, because they are the most stringent that I’ve come across, as a baseline to evaluate your processes. I would also recommend doing internal audits asking: are my processes compliant with the most stringent standard? That way I only have to build to one standard and not 50. I come from an insurance background, and setting up systems for compliance with 50 state-specific regulatory requirements can become somewhat of a coding nightmare, and then it requires a person to sit there and make sure it stays compliant on an ongoing basis. That’s a very heavy regulatory administrative burden on companies. Build to the most restrictive standard if you can. However, restrictive standards are just the floor. Paragi was talking earlier about data classification; there are data classification requirements inside a lot of these regulations: “this type of data is considered PII,” “this type of data is considered personal and you cannot use it.” But that’s not all the data businesses are collecting. So you also want to have a view on: what is it that we have, where does it live, and how are we classifying it?
Jud Dressler: Thank you. When we’re talking about overall risk, a lot of times boards are asking about AI risk. Kind of a loaded question, but Chris: Who owns AI risk?
Chris Wheeler: You gave me the toughest one, Jud. So I think the cop-out answer here is “it depends,” but I do think there is a little bit of nuance and I’ll get into it. So I think depending on your particular organization—and I talked about a couple different models—are you predominantly offering your AI services to clients? Are you providing an AI-specific platform? I think in organizations like that, your risk is going to live more within that CTO and legal organization. And then if your risk is more on what your employees are using, I think a lot more of that’s going to live within your information security organization, your legal organization. But it really just depends on where you are taking on the most risk. Because the reality is in most companies, it’s not any one person’s job; it’s a shared responsibility. And we’re seeing a lot of clamor around: Is there going to be a new Chief AI Officer as a role reporting directly to the CEO and reporting directly to the boards? We’ll see if we get there. But right now, I think it largely depends on your particular organization and your risk tolerance. But I also think we’re going to do a poll in a minute here to see if we’re onto something or maybe other people have opinions. But I think the reality right now is everybody is spreading this out amongst their various executives, and it depends on the service you’re offering.
Conclusion and Next Steps
Jud Dressler: Appreciate that, Chris. We are going to throw up that poll. So let us know, who currently owns this within your organization? Who has that ultimate responsibility? Go ahead and give us your response; we’ll get to the results when they’re ready.
We only have a few minutes left, so I want to wrap things up. There is a sentiment out there that we can lessen data privacy risk through AI; some call that a fallacy. But really, like I started the conversation with, we want to move from being “safe,” which implies a static state, to being resilient. So I’ll open it to the panel: if you had to prioritize one AI resilience move within the next 90 days, what would it be?
Chris Wheeler: Happy to start on this one. You saw in one of the questions we got on the side here: “Are we safe?” And I think the real question in 2026 is, like Jud mentioned, “Are we resilient?” The things that come to mind for me, which we’ve touched on, are visibility: do you even know what’s going on in your environment? Okay, if you have visibility, how do you detect when you might have some type of misconfiguration? And then your responses therein. So I think the idea of being AI resilient comes down to how quickly you can contain things that are operating way faster than humans ever did them. It’s making sure that your procedures and your detections are up to date in that regard.
The second thing, which I think is a foundational piece—Paragi had some really good points on this—is just knowing where your data is and how you’re doing those reviews in the first place. Are you accounting for non-human identities, service accounts, API keys, and all these things that agents or non-humans are using to access your systems? If you’re still just asking, “What does Chris have access to, what does Paragi have access to, what does Mehvish have access to?” and looking at that from a very standard access review process, you’re only hitting a very small segment of what has access to data and can take actions. So looking at those underlying identity and access management principles, and how you’re reviewing them on a periodic basis, would be one of my biggest recommendations.
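As a rough illustration of what an agent-aware access review might look for, the sketch below flags non-human identities with stale credentials or overly broad grants. The `Identity` model, the 90-day rotation threshold, and the high-risk permission set are all invented for this example; a real review would pull its inventory from your IAM system or secrets manager rather than a hardcoded list.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical identity record: kind distinguishes humans from service
# accounts, API keys, and autonomous agents. Fields are illustrative.
@dataclass
class Identity:
    name: str
    kind: str                      # "human", "service_account", "api_key", "agent"
    last_rotated: date
    permissions: list = field(default_factory=list)

MAX_KEY_AGE_DAYS = 90                    # assumed rotation policy
BROAD_PERMS = {"*", "admin", "db:drop"}  # assumed high-risk grants

def review(identities, today):
    """Flag non-human identities with stale credentials or broad grants."""
    findings = []
    for ident in identities:
        if ident.kind == "human":
            continue  # standard access reviews already cover people
        if (today - ident.last_rotated).days > MAX_KEY_AGE_DAYS:
            findings.append((ident.name, "stale credential"))
        for perm in ident.permissions:
            if perm in BROAD_PERMS:
                findings.append((ident.name, f"broad grant: {perm}"))
    return findings

inventory = [
    Identity("chris", "human", date(2026, 1, 1), ["repo:read"]),
    Identity("build-bot", "service_account", date(2025, 9, 1), ["*"]),
    Identity("agent-ci", "agent", date(2026, 2, 15), ["db:drop"]),
]
for name, issue in review(inventory, date(2026, 3, 1)):
    print(name, "->", issue)
```

The point of the sketch is the loop structure, not the specific rules: the review iterates over every identity, human or not, so agents and service accounts can no longer slip past a people-only access review.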
Jud Dressler: Appreciate that. Other panelists? We have time for kind of one more short, quick answer.
Paragi Shah: Yeah, I can go. Everything that Chris said, plus guardrails. Guardrails are something that I feel are essential. We want to make sure that we have a basic set of principles or rules that govern agentic coding tools. And of course, as Chris said, having the right set of controls in place for auditing and monitoring, which includes integrating with our existing SOC workflows and building automation around them.
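A minimal sketch of the kind of guardrail Paragi describes: a circuit breaker that inspects an agent’s proposed action and routes high-risk operations, like the `drop database` command from the incident discussed earlier, to a human for approval instead of executing them autonomously. The patterns, function names, and return values here are illustrative, not taken from any specific product.

```python
import re

# Assumed high-risk patterns; a real deployment would maintain and tune these.
HIGH_RISK = [
    r"\bdrop\s+(database|table)\b",
    r"\brm\s+-rf\b",
    r"\bdelete\s+from\b(?!.*\bwhere\b)",  # DELETE with no WHERE clause
]

def requires_human_review(action: str) -> bool:
    """Return True if the proposed action matches a high-risk pattern."""
    return any(re.search(p, action, re.IGNORECASE) for p in HIGH_RISK)

def execute(action, run, ask_human):
    """Circuit breaker: high-risk actions run only with explicit human approval."""
    if requires_human_review(action):
        if not ask_human(action):
            return "blocked"
    return run(action)

# A destructive statement is held for review; a routine query passes through.
print(execute("DROP DATABASE prod;", run=lambda a: "ran", ask_human=lambda a: False))
print(execute("SELECT 1;", run=lambda a: "ran", ask_human=lambda a: False))
```

The design choice worth noting is that the agent never decides for itself whether an action is risky: the check sits outside the model, in plain code, so a prompt injection cannot talk its way past it the way it disabled confirmations in the Copilot case.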
Jud Dressler: All right. Ingrid, can we get the poll results? There you go. The poll results so you can kind of see where everybody stood.
Ladies and gentlemen, Chris, Paragi, Mehvish, thank you for your participation, thank you for your candor. Appreciate it. To our audience, the goal again isn’t to build a wall around AI, but to build a system that can take a hit and keep functioning. We just dropped the link for the next webinar to register in the channel. That will be April 30th at 11:00 AM Eastern Time, and we’re going to continue this AI discussion, really on AI risk with our underwriting side. So we’re going to talk about bridging the gaps between identification, mitigation, and effective insurance transfer. We’re going to talk with our Chief Underwriting Officer, Maria Long, and our Global Director of Insurance Product, Michelle Worl. We’re going to get their deep underwriting perspective on AI, explore insights into present and emerging risks, and give an outlook on coverage expectations and potential gaps. We look forward to seeing you then. Thank you all.


