We’ve all done it. Struck up a conversation with ChatGPT (or Grok, Gemini, or Claude) like it’s an old friend, a therapist, or a super-smart assistant. As Nicole Nguyen points out in her recent Wall Street Journal piece, “The Five Things You Shouldn’t Tell ChatGPT,” these AI tools learn a lot about us from those chats. They remember our preferences, our problems, even intimate details we might later regret sharing.

While this personalization makes chatbots incredibly useful – analyzing data, debugging code, drafting emails – it comes with significant privacy risks. AI companies use our conversations to train their models, and as Nguyen highlights, this data isn’t always secure. Bugs have exposed chat snippets, misdirected emails have revealed personal info, and there’s always the risk of data breaches or warrants compelling companies to hand over chat logs.

Even the AI companies themselves issue warnings. OpenAI asks users not to share sensitive information, and Google reminds Gemini users not to input confidential data. So, what exactly should stay out of the chat window? Based on Nguyen’s reporting and advice from privacy experts, here are five categories of information you should absolutely avoid sharing with standard consumer chatbots:

1. Your Personal Identity Information

This seems obvious, but in the flow of conversation, it’s easy to let details slip. Avoid sharing:

  • Social Security numbers
  • Driver’s license or passport numbers
  • Your full date of birth
  • Home addresses
  • Personal phone numbers

While some chatbots attempt to redact this information, it’s not foolproof. As Stanford AI expert Jennifer King told the WSJ, once you type something in, “you lose possession of it.” Don’t risk your core identity details ending up in training data or a leaked database.

2. Specific Medical Results and History

Tempted to upload lab results for analysis or ask about a sensitive health condition? Think twice. Chatbots aren’t covered by the same strict privacy regulations (like HIPAA in the US) that protect your health information when dealing with doctors. Sharing identifiable medical data could lead to embarrassment or even discrimination if exposed. If you must ask about test results, follow King’s advice: redact all personal information first, sharing only the necessary data points.
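King’s redact-first advice can be partly automated with a simple pre-filter you run before pasting anything into a chat window. The sketch below is a minimal, illustrative example (pattern names and placeholders are my own, and regex matching alone will miss names, addresses, and other free-text details – it is not a substitute for real PII-detection tooling):

```python
import re

# Hypothetical pre-filter: masks common US identifier patterns before a
# prompt leaves your machine. Pattern matching is NOT exhaustive.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # Social Security numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # credit card numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # phone numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DOB]"),      # dates like 4/15/1980
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def redact(text: str) -> str:
    """Replace recognizable identifier patterns with placeholder tags."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "My SSN is 123-45-6789, call me at 555-867-5309, born 4/15/1980."
print(redact(prompt))
# -> My SSN is [SSN], call me at [PHONE], born [DOB].
```

The idea mirrors King’s advice: strip out everything that identifies you and share only the data points the model actually needs to answer your question.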

3. Financial Account Details

Guard your financial information fiercely. Never share:

  • Bank account numbers
  • Investment account numbers
  • Credit card details (unless directly for a subscription payment through a secure portal)

Exposing this information, even accidentally, could put your funds at risk.

4. Proprietary Company Information

Using ChatGPT for work? Be extremely careful. Sharing client data, internal strategies, non-public financial results, or unpublished source code can have disastrous consequences. Nguyen reminds us of the Samsung incident where engineers leaked sensitive code to ChatGPT, leading to a company-wide ban. If AI tools are essential for your job, push for your company to adopt an enterprise-grade AI solution with built-in privacy protections, or use a custom, secured internal AI. Don’t use the public versions for sensitive work tasks.

5. Logins, Passwords, and Security Credentials

With AI agents emerging that promise to perform tasks for you, the temptation to share login credentials might grow. Resist it. Chatbots are not secure digital vaults like dedicated password managers. Keep your usernames, passwords, PINs, and answers to security questions far away from your chat prompts.
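If you script your own interactions with a chatbot API, a last-line guard can catch obvious credential leaks before a prompt is sent. This is a sketch only – the key prefixes below reflect common public conventions, and a simple check like this is a sanity net, not a complete secret scanner:

```python
import re

# Illustrative guard: refuse to send prompts that appear to contain secrets.
# These patterns are assumptions based on well-known token formats.
SECRET_PATTERNS = [
    re.compile(r"\bsk-[A-Za-z0-9_-]{16,}\b"),        # OpenAI-style API keys
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),             # AWS access key IDs
    re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),          # GitHub personal access tokens
    re.compile(r"(?i)\b(password|passwd|pin)\s*[:=]\s*\S+"),
]

def looks_sensitive(prompt: str) -> bool:
    """Return True if the prompt matches any known secret pattern."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

def safe_send(prompt: str) -> str:
    if looks_sensitive(prompt):
        raise ValueError("Prompt appears to contain credentials; refusing to send.")
    return prompt  # hand off to your chat client here

print(looks_sensitive("my password: hunter2"))  # -> True
print(looks_sensitive("summarize this meeting"))  # -> False
```

The safer habit, of course, is structural: keep credentials in a password manager or environment variables, so they never sit in your clipboard near a chat window in the first place.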

How to Chat More Safely

Beyond avoiding these five topics, Nguyen shares tips for enhancing your privacy:

  • Check Your Settings: Most major chatbots (ChatGPT, Gemini, Copilot) use chats for training by default, but offer an opt-out in their settings. Use it. (Anthropic’s Claude currently doesn’t train on user data by default).
  • Delete Regularly: Get into the habit of deleting your chat history. Most companies purge “deleted” data within about 30 days (though check specific policies – some, like DeepSeek, may retain data longer).
  • Use Temporary/Incognito Modes: ChatGPT’s “Temporary Chat” prevents conversations from being saved to history or used for training, similar to a browser’s incognito mode.
  • Consider Anonymizers: Services like DuckDuckGo’s Duck.ai aim to anonymize your prompts before sending them to models like GPT or Claude.

The Bottom Line

Chatbots are powerful tools designed to be conversational and helpful. But remember, they are not truly private confidantes. As the WSJ article underscores, the responsibility for protecting sensitive information lies with you, the user. Think carefully before you type, hold back the secrets, and make use of the privacy settings available.

For more visit: https://www.wsj.com/tech/personal-tech/chatgpt-tips-privacy-987099b4
