Threatonomics

Would you fall for a live deepfake?

Increasingly simple to execute, is this the new face of social engineering?

by Emma McGowan, Senior Writer

The Office of Senate Security revealed last week that the head of the Senate Foreign Relations Committee was targeted in a deepfake video call. An unknown person, claiming to be the former Ukrainian Minister of Foreign Affairs, Dmytro Kuleba, lured the Senator onto a Zoom call. The attack was thwarted when the Senator and staff noticed that the caller's behavior was not consistent with past interactions.

While deepfake video calls still sound like science fiction to many, we expect that the recent boom in generative artificial intelligence products will lead to an increase in deepfake video call fraud. And it won't have only governmental implications: Earlier this year a Hong Kong-based finance worker at a multinational company was tricked into sending $25.6 million to scammers after being brought into a conference call with a group of colleagues, including the CFO. A later investigation revealed that everyone on the call, aside from the victim, was a deepfake.

The use of deepfakes isn't necessarily an issue directly related to cyber insurance unless the fake leads to a cyber crime. However, social engineering remains the most effective attack vector for bad actors, and we are often called on to provide our perspective on how to spot fakes and avoid fraud.

As more businesses embrace remote work and virtual meetings, the threat of deepfake impersonations on platforms like Zoom, Microsoft Teams, and Google Meet is only growing. But how do you combat a technology like this, especially considering how rapidly it is improving? 

Tips for avoiding fraud

Some of the advice for avoiding fraud is the same, regardless of how that fraud is perpetrated. When dealing with highly sensitive transactions of either money or personal data:

  • Never assume the person you are communicating with is the person they claim to be. To confirm their identity, call the person on a number or via a method you’ve previously used.
  • Make sure any large financial transactions don't have a single point of contact in order to go through. This blocks a single person, whether a bad actor or a rogue employee, from having full control of both payment initiation and approval. Utilize multi-factor authentication (MFA) methods, like biometric checks, one-time passwords, or digital tokens, to help verify the identity of all participants.
  • At the same time, limit the number of people who are authorized to make financial transactions. Keep the names of those employees confidential. 
  • Regularly train and test employees and vendors on complying with your protocols, as well as ways to identify fraudulent transactions.
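The dual-control rule above can be reduced to a simple invariant: a payment executes only when it has been initiated and approved by two different people, both on a small, pre-authorized list. The sketch below is a minimal illustration of that check; all names, roles, and amounts are hypothetical, not a description of any real payment system.

```python
# Minimal sketch of dual-control payment approval.
# Names and the approver list are hypothetical.
from dataclasses import dataclass
from typing import Optional

AUTHORIZED = {"alice", "bob", "carol"}  # small, confidential approver list


@dataclass
class PaymentRequest:
    amount: float
    initiator: str
    approver: Optional[str] = None


def can_execute(req: PaymentRequest) -> bool:
    """A payment runs only when initiator and approver are distinct,
    pre-authorized employees -- no single point of contact."""
    return (
        req.initiator in AUTHORIZED
        and req.approver in AUTHORIZED
        and req.approver != req.initiator
    )


req = PaymentRequest(amount=25_600_000, initiator="alice")
print(can_execute(req))   # blocked: no second approver yet
req.approver = "alice"
print(can_execute(req))   # blocked: same person on both sides
req.approver = "bob"
print(can_execute(req))   # allowed: two distinct authorized people
```

The point of the sketch is that the check is structural, not judgment-based: even a perfectly convincing deepfake of the CFO cannot complete a transfer alone, because the system itself refuses single-party execution.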

Video call deepfakes, however, bring their own set of challenges. For example, in the Hong Kong fraud above, the employee was originally suspicious of the request, which is why the video call took place. Before generative AI, that caution likely would have been sufficient to protect his company against fraud. But in a post-consumer genAI world, in which multiple deepfake applications advertise that they work in real time, particularly during video calls, it's no longer enough.

How to detect a deepfake video call

The Massachusetts Institute of Technology (MIT) Media Lab created guidelines for users to visually detect deepfakes. The lab acknowledges that these guidelines primarily work against amateur deepfakes and may be less effective against advanced applications. However, considering that no technology yet exists to reliably protect against this type of fraud (and education is therefore your best defense against loss), they're a good place to start educating yourself and your employees about deepfake video fraud.

Here are MIT’s tips for detecting deepfakes:

  • Focus on the face. Most high-quality deepfakes involve altering the facial features.
  • Observe the cheeks and forehead. Does the skin appear unusually smooth or overly wrinkled? Are the skin's age characteristics consistent with the hair and eyes? Deepfakes can often be mismatched in these areas.
  • Pay attention to the eyes and eyebrows. Do the shadows fall where you'd expect them to? Deepfakes may struggle to accurately capture the natural physics of light and shadow.
  • Check the glasses. Is there a glare, or perhaps too much glare? Does the glare shift appropriately as the person moves? Deepfakes often fail to replicate the natural behavior of lighting.
  • Inspect the facial hair or lack of it. Does the beard or mustache look authentic? Deepfakes might add or remove facial hair but often fail to make it appear fully natural.
  • Notice any facial moles. Does the mole look convincing?
  • Watch the blinking. Is the person blinking too frequently or too little?
  • Focus on the lip movements. Some deepfakes rely on lip-syncing. Do the lip movements align naturally with the speech?
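The blinking cue is one of the few on this list that can also be checked programmatically. A common heuristic in deepfake research tracks the eye aspect ratio (EAR), a per-frame measure that drops sharply when the eye closes, and counts blinks over time; a rate far outside the typical human range is suspicious. The sketch below is a simplified illustration over precomputed EAR values, not a production detector; the threshold and the "normal" blink-rate band are illustrative assumptions.

```python
# Naive blink-rate heuristic over per-frame eye-aspect-ratio (EAR) values.
# EAR drops sharply when the eye closes; counting those dips approximates
# blinks. The threshold and blink-rate band are illustrative assumptions.

def count_blinks(ear_values, closed_threshold=0.2):
    """Count open-to-closed transitions (EAR dipping below the threshold)."""
    blinks, eye_closed = 0, False
    for ear in ear_values:
        if ear < closed_threshold and not eye_closed:
            blinks += 1
            eye_closed = True
        elif ear >= closed_threshold:
            eye_closed = False
    return blinks


def blink_rate_suspicious(ear_values, fps=30, low=8, high=30):
    """Flag clips whose blinks-per-minute fall outside a plausible human band."""
    minutes = len(ear_values) / (fps * 60)
    if minutes == 0:
        return True
    per_minute = count_blinks(ear_values) / minutes
    return per_minute < low or per_minute > high


# A 2-second clip at 30 fps: 25 open frames, a 5-frame blink, then open again.
frames = [0.3] * 25 + [0.1] * 5 + [0.3] * 30
print(count_blinks(frames))  # 1
```

In practice the EAR values would come from facial-landmark tracking on the video feed; the heuristic itself is deliberately simple, which is also why newer deepfake models increasingly simulate blinking well enough to defeat it.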

MIT, in conjunction with Northwestern University, also hosts a site that allows users to train themselves to detect deepfakes. The site, "DetectFakes," presents a series of images and asks users to determine whether each is AI-generated or real.

Utilizing deepfake detection applications

Reviewing content through deepfake detection applications can also assist in identifying potential generative AI threats that human detection would otherwise miss. Here are two deepfake detection programs you can experiment with:

Sensity

This is one of the leading deepfake detectors and AI applications on the market. Sensity advertises that its model is trained on databases containing millions of images generated by Generative Adversarial Networks (GANs) to fine-tune its detection algorithms. Sensity's detection has reportedly been tested against content from DALL-E, FaceSwap, Stable Diffusion, and Midjourney.

WeVerify

Announced in 2019, WeVerify lets users upload media such as images and videos to determine the probability that the content is a deepfake. WeVerify combines cross-modal verification, social media analysis, and human content verification to assess media authenticity.

As deepfake technology advances, it's not just pushing the limits of security; it's shaking the very foundation of trust in our digital interactions. What used to be a clear line between real and fake is now blurring at an alarming rate. And this isn't just a technical problem. It's a trust problem.

The real challenge isn't simply spotting these fakes anymore; it's learning to navigate a world where what we see can't always be trusted. To stay ahead, businesses and leaders need more than detection tools: they need to foster a mindset of constant vigilance and adaptability. In a reality where deception grows more sophisticated by the day, resilience isn't just about defense; it's about how quickly and smartly we respond to the unknown.
