
AI Is Being Used to Impersonate People You Trust

Summary:
AI deepfake technology is making it easier for scammers to persuasively mimic the voices and faces of others—increasing the risk of consumer identity theft and financial loss. Here are examples of groups being impersonated, plus tips for protecting yourself against AI scams.

Scammers are pretending to be everyone from soldiers to celebrities


These days, you often can’t even trust your own eyes or ears. Thanks to rapid advances in AI and related technologies, scammers can successfully fake just about anything—people’s faces and voices, legitimate phone numbers, company websites, and more. The content can look and sound so convincing that even the savviest consumers get fooled into sending money or personal information to a fraudster who’s hiding behind someone else’s persona.

AI-generated deepfake videos, photos, and audio are at the heart of these scams, and they’re inflicting serious financial harm. In the first quarter of 2025 alone, deepfake-driven fraud resulted in more than $200 million in losses.

These schemes are so effective because they trick people into thinking they’re interacting with a real person they know, trust, and respect. Here are just some of the groups that scammers are impersonating with the help of AI.

Grandchildren and other relatives

The Federal Communications Commission (FCC) warns about an AI-driven scam targeting grandparents, in which fraudsters call and impersonate a grandchild or other close relative. The caller claims to have been arrested or involved in a car accident and to be in desperate need of immediate financial help. The caller may urge the grandparent not to tell anyone else about the situation. They might also hand the phone to someone claiming to be a lawyer, court official, or law enforcement officer, who instructs the grandparent on how to send payment. None of it is real—it’s an elaborate scheme to steal money and personal information.

How are scammers able to masquerade as close relatives? It starts with the fact that many young people’s voices can be heard in video clips posted online. Scammers can take a short voice clip, “clone” it using AI voice-synthesis tools, and then “speak” in that voice during a call. Combined with stolen data or personal details pulled from social media, this often lets them create a convincing facsimile of a real relative. Scammers can also “spoof” the caller ID so that the call appears to come from a trusted number.

Because this scam is designed to prey on the fears of grandparents about their loved ones—and because AI technology is getting so effective at replicating voices—it has become alarmingly common.

Military service members

Military veterans and active-duty service members have to deal with AI scams on two fronts. First, they are frequently targeted by bad actors—including foreign adversaries—through AI-generated deepfake videos and audio. The Military Times reports that the U.S. military community filed nearly 43,000 imposter-scam reports in 2024, costing troops and their families an estimated $178 million.

Second, they are often impersonated as a way to target civilians in online dating scams. Romance scammers trade on the trust and respect that people feel toward military service members. Using the AI-generated persona of an active-duty soldier, they often claim to be stationed overseas—an excuse for why they can’t meet in person. Once they’ve struck up an online relationship with a victim and earned their trust, they urge the victim to send money or sensitive personal information.

To show how easy it is for malicious groups and individuals to impersonate the military community, the advocacy group We the Veterans & Military Families has produced a public-service video campaign featuring fake, AI-generated service members.

Celebrities and influencers

AI deepfake technology is allowing fraudsters to realistically impersonate celebrities and social media influencers, often targeting victims through phony endorsements of sketchy products, brands, or nonprofits. As just one example, Oprah Winfrey recently appeared in a number of social media videos promoting a weight-loss supplement. Except it wasn’t really Oprah—it was an AI-generated fake, and the sales pitch led to a dubious website. Other examples have included an AI clone of chef Gordon Ramsay offering free cookware, and an AI clone of Kim Kardashian asking people to send her money to help victims of the California wildfires.

AI-generated celebrities are also prevalent in romance scams. A French woman was conned out of $850,000 by a scammer pretending to be Brad Pitt. More recently, the news station KTLA reported on a Southern California woman who believed she was being pursued romantically by a soap opera TV actor. Using hyper-realistic AI deepfake videos, the scammer manipulated the victim into not only sending $81,000 in cash, but also selling her family’s home and handing over the proceeds.

Law enforcement officials

Like military members, law enforcement officials are frequently impersonated with AI because of the authority they command. Getting a harsh, urgent message from someone who looks or sounds exactly like a real police chief or deputy, demanding payment for some alleged infraction, can be distressing.

In one recent law enforcement scam, AI technology was used to clone the voice of the police chief of Salt Lake City, Utah, with a message claiming that the victim owed $100,000 to the government. In another, fraudsters used AI to mimic the voices of local law enforcement officials in counties in Virginia, threatening victims over alleged court fees or unpaid fines.

How to avoid scams involving AI impersonation

Here are some tips on how to protect yourself from identity theft and financial loss in AI-driven scams:

• If you’re watching a video in which a celebrity makes a sales pitch, don’t automatically assume it’s real. Check trusted sources to verify its authenticity.
• If someone contacts you unexpectedly and makes an unusual request or demand for money or personal information—even if they look or sound like a person you know—verify their identity using trusted sources.
• Scammers sometimes attempt to gather samples of a victim’s voice for use in financial or identity fraud. Screen your calls and avoid talking by phone with unknown contacts. If you’re not expecting a call from a specific person or company, let it go to voicemail.
• If you get a surprise call from someone claiming to be a close relative with an urgent plea for money, hang up and call or text the real person at their real phone number. If they can’t be reached, call or text another trusted relative to verify the story.
• Regardless of who’s contacted you, always proceed with extreme caution if you’re being pressured to immediately send money or provide sensitive personal information.
• Be particularly wary if someone urges or commands you to pay them through a mobile payment app, wire transfer, gift card, or money order. That’s often a sign of fraud.
• If the person asking you for money or personal information insists that you keep the discussion confidential, that’s a red flag.
• If you suspect a scam, report it to local law enforcement.

Understand that AI deepfake technology has evolved to the point where it can convincingly impersonate just about anyone. When someone unexpectedly asks or directs you to send money or personal information, it’s safest to step back and do your homework: Seek out trusted friends, relatives, or news sources to confirm whether the request is real.

About IDX

We’re your proven partner in digital privacy protection with our evolving suite of privacy and identity products.