ChatGPT Is Exciting, But Is It Making Things Easier for Scammers?

Summary: Most users of the AI-driven chat platform ChatGPT are there for practical purposes or to have fun. A new alert, however, shows that cybercriminals are looking to use it for phishing attacks, malware, identity fraud, and more. Here’s how ChatGPT works, what the risks are, and how you can keep your privacy and identity protected.

The popular AI chat tool ChatGPT is good at mimicking humans, which is why criminals love it

Artificial intelligence (AI) technology has become so sophisticated that it’s increasingly hard to tell whether the email you just received—or the essay you just read, or the song you heard online—was written by a human or a robot. (For the record, this article was written by a human.)

The latest and most popular AI-powered platform is a chat tool called ChatGPT. You may have heard of it; you might even be using it. In the first two months after its November 2022 launch, ChatGPT reached 100 million users, making it the fastest-growing consumer application in history.

With an ability to mimic human communication and improve at a remarkable pace, AI tools like ChatGPT can offer many benefits to society, in areas ranging from healthcare to education. But as with any technological advance, they also raise major ethical and safety concerns, not the least of which is the potential for bad actors to harness them for malicious ends, whether ransomware, malware, phishing, or other scams.

Here’s an overview of how ChatGPT works, what the possible risks are to your privacy and identity, and how you can stay protected.

How ChatGPT works

Created by a company called OpenAI, ChatGPT lets you ask questions and hold conversations with a chatbot, and have the bot produce new content for you. Thanks to its use of advanced AI, the bot’s answers are astonishingly natural-sounding, and the content it generates is more human-like than that of any previous chat tool. You can ask the bot to explain topics, offer advice, assist you with research, or compose a vast range of materials, from emails, text messages, school essays, résumés, and computer code to songs, poems, and jokes. (Quality may vary.)

On the plus side, this can all be useful, convenient, and even fun, as long as you factor in the potential for serious issues like plagiarism, misinformation, and false answers. On the minus side, bad actors are already seeking to leverage ChatGPT’s human-like qualities as a brand-new way to deceive people.

The developers of ChatGPT have fortified the platform with security guardrails to keep cybercriminals from exploiting it. But as is often the case, scammers may have found a workaround.

The privacy and identity risks of AI chat platforms like ChatGPT

Security experts at ZeroFox recently observed discussion of a cloned, unauthorized, open-source version of ChatGPT, one that has none of the safeguards of the authorized version. Various bad actors claim that this clone could be a valuable tool for cybercrime, making it easier for scammers to, for example, create and send an authentic-looking email as a front for a phishing attack.

Here’s how that might work: The chatbot can help cybercriminals convincingly impersonate the communication style of a specific person, group, or company, letting them send emails or text messages that read as authentic and trustworthy. That could trick more people into clicking malicious links and aid criminals in efforts to commit identity fraud. Couple this with the fact that the bot can also help scammers easily produce the code needed to build and send fake messages, and the formula for misuse is complete.

Law enforcement agencies and others are sounding the alarm. The EU police force Europol has issued a warning about the potential for ChatGPT to be exploited by cybercriminals. According to a Reuters article about the Europol alert, the bot’s skill at mimicry “could be used by criminals to target victims ... Criminals with little technical knowledge could turn to ChatGPT to produce malicious code.”

Even OpenAI’s own CEO, Sam Altman, has expressed concerns about the risks of AI-driven platforms, especially given that they can generate their own computer code. These platforms “could be used for offensive cyberattacks,” he said in an interview.

How to protect against AI-driven fraud risks

The rise of AI means it will only get harder to determine whether emails, text messages, social media posts, and other communications are legitimate. It will also get easier for criminals to impersonate others and commit identity fraud.

In response, be sure to diligently follow best practices for protecting your privacy and identity. To name a few: Use strong passwords and change them regularly, enable multi-factor authentication for account logins, and look very carefully at message senders and their links before responding or clicking.

For optimal protection against both current risks and emerging ones posed by fast-evolving technologies like AI chat, consider a comprehensive plan like the Complete Plan from IDX. It features a full range of advanced tools that help boost your privacy and guard your identity; it also limits the damage and assists in recovery should your identity be stolen.

The plan includes $1 million in identity theft insurance and a 100% identity recovery money-back guarantee. And it offers access to IDX’s dedicated, expert care team—humans, not bots—who will work to make sure your identity and reputation are fully restored.

While you might be using ChatGPT to help write a résumé or have a few laughs, cybercriminals are looking at it as a potential new weapon in their fraud arsenal. Stay ahead of them by enhancing your privacy and identity protection.

About IDX

We're your proven partner in digital privacy protection, with an evolving suite of privacy and identity products.