
The More You Reveal to AI, the Greater the Privacy Risk

Summary:
AI chatbots like ChatGPT, Claude, Gemini, and Copilot can be great tools for improving your daily life. But when you share sensitive personal information with them, your privacy may be at stake. Here are the main risks, and what you can do to better protect your privacy.

What do chatbots know about you—and what are they doing with that info?


AI is becoming increasingly integrated into people’s daily lives. But as with any rapidly evolving technology, it carries both benefits and risks. AI can help with everything from sparking creativity to saving time on routine tasks. On the other hand, it can be manipulated or misused by bad actors or by companies that don’t necessarily have your best interests in mind.

You may already know that AI deepfake technology is being exploited by scammers to trick people into giving up money or personal information, including through romance scams and fraudulent impersonations of people like celebrities and law enforcement officers.

But you don’t need to be scammed by cybercriminals in order to experience unwanted effects from AI. Experts are sounding the alarm about AI’s overall privacy risks, particularly regarding how much AI tech companies know about you—and what’s being done with that information.

Some of these concerns involve AI-enabled devices like smart doorbell cameras and Meta AI smart glasses, both of which can intrude on people’s privacy. But the bigger issues are related to AI chatbots—including Anthropic’s Claude, OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini—and the large language models (LLMs) that underpin them.

The privacy risks involved with AI

Here are just a few reasons why you should be cautious when interacting with AI chatbots:

AI models are susceptible to data breaches and leaks. Because AI-related data is so valuable, and because the models are so complex, the companies behind AI are particularly vulnerable to data breaches, leaks of software code, and other security issues. Any personal data stored in an AI system carries this risk. Take the example of Anthropic: on two recent occasions, the company accidentally leaked sensitive materials, including the source code behind its Claude AI chatbot. While Anthropic says that no customer personal information was involved, such leaks could give bad actors an opportunity to use the source code for malicious purposes.

Your chats are being used to train the model. A Stanford University study revealed that the major AI tech companies use your chats for training purposes unless you request otherwise. The things you tell an AI chatbot—as well as files you upload to the system—are fed into the model and are used to help it become “smarter,” training it on how to interact with others in the future. According to the Stanford researchers, some AI companies even employ humans to review chat transcripts for training purposes.

Your activity from other platforms can be pulled in by chatbots. If, for example, you’re interacting with Google’s Gemini, know that it can leverage your activity and history from other Google products like Gmail and YouTube. The company has stated that this helps make your chatbot experience more personalized. But keep in mind that this cross-platform information is fair game for training the model.

A chatbot might share information in unintended ways. A chatbot can “remember” you, meaning that when you start a new chat, you don’t have to repeat details you provided earlier. While this is convenient, there could be unintended consequences. Information you share about one area of your life can sometimes seep into a separate, unrelated area. For example, according to Technology Review, “a casual chat about dietary preferences to build a grocery list could later influence what health insurance options are offered.” Chatbots also often link you to external apps or sites to complete certain tasks or act on recommendations, and you might not want those third parties to have this information.

Sharing sensitive health data is risky. If, say, you upload a document containing private medical information and ask the chatbot to analyze it for you, there’s no guarantee this information will not wind up in the hands of a third party. While the Health Insurance Portability and Accountability Act (HIPAA) protects most information shared between patients and medical providers, HIPAA does not apply to chatbots like ChatGPT, according to the HIPAA Journal.

How to better protect your privacy when using AI

In the U.S., there is no overarching federal privacy law protecting the content of AI chats, and state laws vary widely. That means much of the burden of guarding your personal information falls on you. Here are some tips for doing so.

Don’t get enticed into revealing too much. Because AI chatbots are designed to communicate through natural language, it’s easy to get a sense that you’re chatting casually with a close friend or confidant. Realize, though, that you’re actually providing personal data to a company whose main priority is making a profit. Before you share any sensitive information, ask yourself a simple question: Would I be comfortable sharing this information with a stranger in public?

If possible, use chatbots without signing in. Most of the major AI platforms allow you to use a basic version of their chatbot without creating an account or signing in. In many cases you’ll be limited in terms of the features you can access, but if you just have a basic question to ask, that can be enough. And your queries won’t be tied to the kind of personal information you’d typically store in an account.

Don’t allow your conversations to be used for training. If you have an account and choose to log in, you can opt out of having your chat data become part of the model’s training. Here’s how to do it for ChatGPT, Copilot, and Claude. For Gemini, log in to your Google account, go to the My Activity page, and look for “Gemini Apps Activity.”

Choose stricter privacy settings. Some chatbots provide additional options for enhanced privacy. Claude, for example, offers incognito chats, which are neither saved to your chat history nor kept in Claude’s memory. Similarly, ChatGPT offers temporary chats that won’t appear in your history and won’t be retained by the system.

Delete old chats. If you’re having second thoughts about things you’ve previously shared with a chatbot, you can go back and delete those chats. Each chatbot provider offers instructions on how to do it. Here’s how to permanently delete conversation threads in ChatGPT, Claude, Copilot, and Gemini.

Tech companies consider your personal data to be their most valuable commodity. In the absence of stronger regulation about what kind of information AI chatbots can collect and what they do with it, your best bet is to avoid giving them the kind of personal details you’d rather keep private.

About IDX

We’re your proven partner in digital privacy protection with our evolving suite of privacy and identity products.