There's No Faking It: Deepfake Fraud Is a Real Problem
Summary:
Scammers are using realistic, AI-generated deepfakes to commit identity fraud and other crimes. Here's how bad actors are using the technology, how to recognize deepfake audio and video, and how to protect yourself against this evolving threat.
AI-generated audio and video are being used to trick businesses and individuals
The person at the other end of that video call certainly looks and sounds legitimate. Maybe it’s someone you’ve bonded with on a dating site, or maybe it’s a semi-distant relative or remote work colleague. Yes, it’s odd that they’re asking you to send them money or provide sensitive personal information, but you trust them.
Just one problem: They’re not real. Their image and voice have been generated through artificial intelligence (AI), and are being controlled behind the scenes by a scammer. What you’re experiencing is a deepfake, a rapidly evolving technology often used for malicious acts.
The U.S. Government Accountability Office (GAO) defines a deepfake as video, photography, or audio that "seems real but has been manipulated with AI. The underlying technology can replace faces, manipulate facial expressions, synthesize faces, and synthesize speech."
More and more criminals are using AI deepfakes to commit identity fraud or pry money and data from businesses and individuals. The digital verification platform Sumsub reported an astonishing 1,740% jump in deepfake-related fraud attempts in North America between 2022 and 2023.
How deepfakes are being used in scams
By creating a deepfake persona, fraudsters can trick people into believing they're interacting with someone they know or want to know. This builds trust, making it easier for the scammer to manipulate the victim. Cybercriminals can also use deepfakes to create compromising material for extortion. For example, an AI tool can take a brief snippet of a person's real voice and "clone" it to produce an authentic-sounding facsimile; the faked voice can then be made to say just about anything.
The majority of deepfake fraud cases thus far have targeted businesses. Even large global companies can fall for these scams: In one recent example, an employee at a multinational design and engineering firm was tricked by a deepfake video call into transferring $25 million of the company’s funds to fraudsters. Many bad actors, meanwhile, are using deepfake audio and video in attempts to gain access to company data, which could result in breaches of customer information.
As this technology grows more sophisticated, it’s also getting easier to use—which means it’s becoming increasingly popular as a method to defraud individuals. Deepfakes have already made their way into the world of romance scams, according to a recent report in Wired. The article described how a crew of scammers used "deepfakes and face-swapping to ensnare victims in romance scams, building trust with victims using fake identities, before tricking them into parting with thousands of dollars."
Tips for detecting deepfake video and audio
While a number of deepfake detection tools exist, many are available only to businesses. Most are also designed to analyze recordings and can't help in real time during an audio or video call. To recognize a deepfake in the moment, you'll most likely have to rely on your own powers of observation.
The MIT Media Lab has offered tips on how to determine whether a person seen on video is a deepfake. Zero in on elements of the person's face, the lab advises, including:
• Cheeks and forehead – "Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes?"
• Eyes and eyebrows – "Do shadows appear in places that you would expect?"
• Eyeglasses – "Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves?"
• Blinking – "Does the person blink enough or too much?"
• Lip movements – "Some deepfakes are based on lip syncing. Do the lip movements look natural?"
In an article for PolitiFact, Manjeet Rege, director of the Center for Applied Artificial Intelligence at the University of St. Thomas, and Siwei Lyu, a computer science and engineering professor at the University at Buffalo, offered advice on listening for clues that a voice might actually be an audio deepfake. These include “irregular or absent breathing noises, intentional pauses and intonations, along with inconsistent room acoustics.”
Use your common sense
One thing is clear: Deepfake technology is evolving at such speed that it will become progressively more difficult to tell fiction from reality. Today you might be able to spot a weird glitch in a person’s face on video, or a strange vocal pattern on a call. But those flaws might not be as noticeable a year or two from now.
Beyond the observational tips offered here, your best defense is common sense. If someone contacts you by phone or video, even a person you seemingly know and trust, and makes an unusual request or demand involving money or sensitive information, step back and assess the situation. Do whatever you can to independently verify that what the person is telling you is true. As Rege said in the PolitiFact interview, "Healthy skepticism is warranted given how realistic this emerging technology has become."