
Would you fall for a live deepfake?

Increasingly simple to execute, are live deepfakes the new face of social engineering?

by Emma McGowan, Senior Writer

The Office of Senate Security revealed last week that the head of the Senate Foreign Relations Committee was targeted in a deepfake video call. An unknown person claiming to be Dmytro Kuleba, the former Ukrainian Minister of Foreign Affairs, lured the Senator onto a Zoom call. The attack was thwarted when the Senator and his staff noticed that the caller’s behavior was inconsistent with past interactions.

While deepfake video calls may still sound like science fiction, we expect the recent boom in generative artificial intelligence products to drive an increase in deepfake video call fraud. And it won’t have only governmental implications: earlier this year, a finance worker at a multinational company’s Hong Kong office was tricked into sending $25.6 million to scammers after being brought into a conference call with a group of colleagues–including the CFO. A later investigation revealed that everyone on the call, aside from the victim, was a deepfake.

The use of deepfakes isn’t necessarily a direct cyber insurance issue unless the fake leads to a cyber crime. But social engineering remains the most effective tool in a bad actor’s arsenal, and we are often asked for our perspective on how to spot fakes and avoid fraud.

As more businesses embrace remote work and virtual meetings, the threat of deepfake impersonations on platforms like Zoom, Microsoft Teams, and Google Meet is only growing. But how do you combat a technology like this, especially considering how rapidly it is improving? 

Tips for avoiding fraud

Some of the advice for avoiding fraud is the same, regardless of how that fraud is perpetrated. When dealing with highly sensitive transactions of either money or personal data:

  • Never assume the person you are communicating with is who they claim to be. To confirm their identity, call them on a number or via a method you’ve used before.
  • Make sure no large financial transaction can go through with a single point of contact. This prevents a single person–whether a bad actor or a rogue employee–from controlling both payment initiation and approval. Use multi-factor authentication (MFA) methods, such as biometric checks, one-time passwords, or digital tokens, to help verify the identity of all participants (a minimal code sketch of these controls follows this list).
  • At the same time, limit the number of people who are authorized to make financial transactions. Keep the names of those employees confidential. 
  • Regularly train and test employees and vendors on complying with your protocols, as well as ways to identify fraudulent transactions.
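
Where these controls are enforced in software, the underlying logic can be simple. The sketch below is a minimal, hypothetical illustration in Python of dual-control approval combined with a one-time-password check: PaymentRequest and its methods are invented names for illustration, pyotp is a real TOTP library, and nothing here reflects any particular payment system.

```python
# A minimal, hypothetical sketch of dual-control payment approval with a
# one-time-password check. PaymentRequest and its methods are invented
# names for illustration; pyotp is a real TOTP library (pip install pyotp).
from dataclasses import dataclass, field

import pyotp


@dataclass
class PaymentRequest:
    amount: float
    payee: str
    initiator: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str, otp_code: str, totp: pyotp.TOTP) -> None:
        # No self-approval: the initiator cannot also sign off.
        if approver == self.initiator:
            raise PermissionError("initiator cannot approve their own payment")
        # Second factor: a time-based one-time password for the approver.
        if not totp.verify(otp_code):
            raise PermissionError("invalid one-time password")
        self.approvals.add(approver)

    def execute(self) -> str:
        # Dual control: at least two distinct approvers must sign off.
        if len(self.approvals) < 2:
            raise PermissionError("payment requires two independent approvals")
        return f"sent ${self.amount:,.2f} to {self.payee}"


# For brevity, both approvers share one demo secret here; in practice each
# approver would enroll their own secret in an authenticator app.
totp = pyotp.TOTP(pyotp.random_base32())

payment = PaymentRequest(amount=25_600_000, payee="Vendor Ltd", initiator="alice")
payment.approve("bob", totp.now(), totp)
payment.approve("carol", totp.now(), totp)
print(payment.execute())  # only reachable with two approvals
```

The design choice that matters here is separation of duties: no single credential, and no single person, can move money alone.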

Video call deepfakes, however, bring their own set of challenges. In the Hong Kong fraud above, for example, the employee was initially suspicious of the request–which is exactly why the video call took place. Before consumer generative AI, that skepticism likely would have been enough to protect his company against fraud. But in a world where multiple deepfake applications advertise that they work in real time, particularly during video calls, it no longer is.

How to detect a deepfake video call

The Massachusetts Institute of Technology (MIT) Media Lab has created guidelines to help users visually detect deepfakes. The lab acknowledges that these guidelines work primarily against amateur deepfakes and may be less effective against more advanced applications. But given that no technology yet reliably protects against this type of fraud, education is your best defense against loss–and these guidelines are a great place to start educating yourself and your employees about deepfake video fraud.

Here are MIT’s tips for detecting deepfakes:

  • Focus on the face. Most high-quality deepfakes involve altering the facial features.
  • Observe the cheeks and forehead. Does the skin appear unusually smooth or overly wrinkled? Are the skin’s age characteristics consistent with the hair and eyes? Deepfakes are often mismatched in these areas.
  • Pay attention to the eyes and eyebrows. Do the shadows fall where you’d expect them to? Deepfakes may struggle to accurately capture the natural physics of light and shadow.
  • Check the glasses. Is there glare, or perhaps too much glare? Does the glare shift appropriately as the person moves? Deepfakes often fail to replicate the natural behavior of lighting.
  • Inspect the facial hair, or lack of it. Does the beard or mustache look authentic? Deepfakes might add or remove facial hair but often fail to make it appear fully natural.
  • Notice any facial moles. Does the mole look convincing?
  • Watch the blinking. Is the person blinking too often or too rarely? (A programmatic version of this check is sketched after the list.)
  • Focus on the lip movements. Some deepfakes rely on lip-syncing. Do the lip movements align naturally with the speech?

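One of these cues, blink rate, has a well-known programmatic analog: the eye aspect ratio (EAR) described by Soukupová and Čech (2016), which drops sharply when the eye closes. The sketch below is self-contained and illustrative only: in a real system the six landmark points per eye would come from a facial-landmark model (dlib, MediaPipe, and similar tools provide them), and the threshold and frame counts are assumptions, not tuned values.

```python
# A self-contained sketch of the eye-aspect-ratio (EAR) blink check
# (Soukupová & Čech, 2016). In a real system the six landmarks per eye
# would come from a facial-landmark model; the points below are synthetic.
import math


def eye_aspect_ratio(eye):
    """eye: six (x, y) points, ordered corner, top x2, corner, bottom x2."""
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Vertical openings over horizontal width: near zero when the eye closes.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count dips below threshold lasting at least min_frames frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)


# An open eye yields an EAR near 0.3; a closed eye drops toward 0.05.
open_eye = [(0, 0), (2, -0.9), (4, -0.9), (6, 0), (4, 0.9), (2, 0.9)]
closed_eye = [(0, 0), (2, -0.15), (4, -0.15), (6, 0), (4, 0.15), (2, 0.15)]
frames = ([eye_aspect_ratio(open_eye)] * 30
          + [eye_aspect_ratio(closed_eye)] * 3
          + [eye_aspect_ratio(open_eye)] * 30)
print(count_blinks(frames))  # -> 1
```

Over the length of a call, a blink count far outside the typical range of roughly 15–20 blinks per minute would be one more reason to verify the caller out of band.
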
MIT, in conjunction with Northwestern University, also hosts a site that lets users train themselves to detect deepfakes. The site, “DetectFakes,” presents a series of images and asks users to judge whether each is AI-generated or real.

Utilizing deepfake detection applications

Reviewing content with deepfake detection applications can also help identify generative AI threats that human review would otherwise miss. Here are two deepfake detection programs you can experiment with:

Sensity

This is one of the leading deepfake detection applications on the market. Sensity advertises that its model is trained on databases of millions of images generated by Generative Adversarial Networks (GANs) to fine-tune its detection algorithms, and its detection has reportedly been tested against output from DALL-E, FaceSwap, Stable Diffusion, and Midjourney.

WeVerify

Announced in 2019, WeVerify lets users upload media such as images and videos to assess the probability that the content is a deepfake. It combines cross-modal verification, social media analysis, and human content verification to reach a conclusion about media authenticity.
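
Detection services of this kind typically expose some form of upload-and-score API. The sketch below shows the general shape of such an integration in Python: the endpoint URL, field names, and response schema are entirely hypothetical and do not describe Sensity’s or WeVerify’s actual APIs; consult the vendor’s documentation for the real interface.

```python
# Submitting a recording to a hypothetical deepfake-detection REST endpoint.
# The URL, field names, and response schema are invented for illustration;
# consult your vendor's API documentation for the real interface.
import requests

API_URL = "https://api.example-detector.com/v1/analyze"  # hypothetical
API_KEY = "your-api-key-here"

with open("suspicious_call_recording.mp4", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"media": f},
        timeout=120,  # video analysis can take a while
    )
response.raise_for_status()

result = response.json()  # e.g., {"deepfake_probability": 0.97}
if result["deepfake_probability"] > 0.8:
    print("High likelihood of manipulation; verify before acting on the call.")
```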

As deepfake technology advances, it’s not just pushing the limits of security—it’s shaking the very foundation of trust in our digital interactions. What used to be a clear line between real and fake is now blurring at an alarming rate. And this isn’t just a technical problem. It’s a trust problem. 

The real challenge isn’t simply in spotting these fakes anymore; it’s about learning how to navigate a world where what we see can’t always be trusted. To stay ahead, businesses and leaders need more than just detection tools—they need to foster a mindset of constant vigilance and adaptability. In a reality where deception grows more sophisticated by the day, resilience isn’t just about defense; it’s about how quickly and smartly we respond to the unknown.
