Threatonomics

Would you fall for a live deepfake?

Increasingly simple to execute, are live deepfakes the new face of social engineering?

by Emma McGowan, Senior Writer

The Office of Senate Security revealed last week that the head of the Senate Foreign Relations Committee was targeted in a deepfake video call. An unknown person, claiming to be the former Ukrainian Minister of Foreign Affairs, Dmytro Kuleba, lured the Senator onto a Zoom call. The attack was thwarted when the Senator and staff noticed that the caller's behavior was inconsistent with past interactions.

While deepfake video calls still sound like science fiction to many, we expect the recent boom in generative artificial intelligence products to lead to an increase in deepfake video call fraud. And the implications go beyond government: earlier this year, a Hong Kong-based finance worker at a multinational company was tricked into sending $25.6 million to scammers after being brought into a conference call with a group of colleagues, including the CFO. A later investigation revealed that everyone on the call, aside from the victim, was a deepfake.

The use of deepfakes isn't necessarily a cyber insurance issue unless the fake leads to a cyber crime. However, social engineering remains the most effective entry point for bad actors, and we are often called on to provide our perspective on how to spot fakes and avoid fraud.

As more businesses embrace remote work and virtual meetings, the threat of deepfake impersonations on platforms like Zoom, Microsoft Teams, and Google Meet is only growing. But how do you combat a technology like this, especially considering how rapidly it is improving? 

Tips for avoiding fraud

Some of the advice for avoiding fraud is the same, regardless of how that fraud is perpetrated. When dealing with highly sensitive transactions of either money or personal data:

  • Never assume the person you are communicating with is the person they claim to be. To confirm their identity, call the person on a number or via a method you’ve previously used.
  • Make sure any large financial transaction requires more than a single point of contact to go through. This prevents a single person, whether a bad actor or a rogue employee, from controlling both payment initiation and approval. Use multi-factor authentication (MFA) methods, like biometric checks, one-time passwords, or digital tokens, to help verify the identity of all participants.
  • At the same time, limit the number of people who are authorized to make financial transactions. Keep the names of those employees confidential. 
  • Regularly train and test employees and vendors on complying with your protocols, as well as ways to identify fraudulent transactions.
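The dual-control rule above, that no single person may both initiate and approve a large payment, can be expressed directly in code. The sketch below is purely illustrative; the function, threshold, and field names are hypothetical and not drawn from any real payment system.

```python
# Illustrative sketch (hypothetical names): enforce dual control and MFA
# for large payments, so no single person can both initiate and approve.

APPROVAL_THRESHOLD = 10_000  # payments above this amount need a second approver


class DualControlError(Exception):
    """Raised when a payment violates the dual-control or MFA policy."""


def authorize_payment(amount, initiator, approver, mfa_verified):
    """Allow a payment only if policy checks pass.

    - Large payments require an approver distinct from the initiator.
    - All payments require the participants to have passed MFA.
    """
    if amount > APPROVAL_THRESHOLD:
        if approver is None or approver == initiator:
            raise DualControlError(
                "large payments require a second, distinct approver"
            )
    if not mfa_verified:
        raise DualControlError("all participants must pass MFA")
    return True
```

In practice this kind of check belongs in the payment workflow itself, not in a script an individual employee can bypass; the point is that the control is structural, not a matter of individual judgment.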

Video call deepfakes, however, bring their own set of challenges. In the Hong Kong fraud described above, the employee was originally suspicious of the request, which is why the video call took place at all. Before generative AI, that step likely would have been sufficient to protect his company against fraud. But in a world of consumer-grade generative AI, in which multiple deepfake applications advertise that they work in real time, particularly during video calls, it's no longer enough.

How to detect a deepfake video call

The Massachusetts Institute of Technology (MIT) Media Lab has published guidelines for visually detecting deepfakes. The lab acknowledges that these guidelines work primarily against amateur deepfakes and may be less effective against more advanced applications. But because no technology yet reliably protects against this type of fraud, education is your best defense against loss, and these guidelines are a great place to start educating yourself and your employees about deepfake video fraud.

Here are MIT’s tips for detecting deepfakes:

  • Focus on the face. Most high-quality deepfakes involve altering the facial features.
  • Observe the cheeks and forehead. Does the skin appear unusually smooth or overly wrinkled? Are the skin's age characteristics consistent with the hair and eyes? Deepfakes are often mismatched in these areas.
  • Pay attention to the eyes and eyebrows. Do the shadows fall where you'd expect them to? Deepfakes may struggle to accurately capture the natural physics of light and shadow.
  • Check the glasses. Is there a glare, or perhaps too much glare? Does the glare shift appropriately as the person moves? Deepfakes often fail to replicate the natural behavior of lighting.
  • Inspect the facial hair, or lack of it. Does the beard or mustache look authentic? Deepfakes might add or remove facial hair but often fail to make it appear fully natural.
  • Notice any facial moles. Does the mole look convincing?
  • Watch the blinking. Is the person blinking too frequently or too little?
  • Focus on the lip movements. Some deepfakes rely on lip-syncing. Do the lip movements align naturally with the speech?

MIT, in conjunction with Northwestern University, also hosts a site that allows users to train themselves to detect deepfakes. The site, "DetectFakes," presents a series of images and asks users to determine whether each is AI-generated or real.

Utilizing deepfake detection applications

Reviewing content through deepfake detection applications can also assist in identifying potential generative AI threats that human detection would otherwise miss. Here are two deepfake detection programs you can experiment with:

Sensity

Sensity is one of the leading deepfake detectors on the market. The company advertises that its model is trained on databases containing millions of images generated by Generative Adversarial Networks (GANs) to fine-tune its detection algorithms, and reports that its detector has been tested against output from Dall-E, FaceSwap, Stable Diffusion, and Midjourney.

WeVerify

Announced in 2019, WeVerify lets users upload media such as images and videos to determine the probability that the content is a deepfake. WeVerify combines cross-modal verification, social media analysis, and human content verification to assess media authenticity.

As deepfake technology advances, it's not just pushing the limits of security; it's shaking the foundation of trust in our digital interactions. The once-clear line between real and fake is blurring at an alarming rate. And this isn't just a technical problem. It's a trust problem.

The real challenge is no longer simply spotting fakes; it's learning to navigate a world where what we see can't always be trusted. To stay ahead, businesses and leaders need more than detection tools: they need to foster a mindset of constant vigilance and adaptability. In a reality where deception grows more sophisticated by the day, resilience isn't just about defense; it's about how quickly and smartly we respond to the unknown.
