The One Thing ChatGPT Boss Says You Should NEVER Tell AI
ChatGPT is a powerful and flexible AI chatbot that can answer complex questions and discuss almost anything you want. However, OpenAI CEO Sam Altman has issued a stark warning to users who share sensitive information with the chatbot: your chats are not legally protected and could be used as evidence against you in a lawsuit.
Why You Shouldn’t Share Sensitive Information with AI
OpenAI’s ChatGPT has gained popularity, with some users turning to the chatbot as a therapist or life coach, sharing personal details and seeking advice. Although these chats might seem private, the company’s CEO, Sam Altman, noted in a recent podcast interview that your conversations lack legal privacy protections.
“I think we will certainly need a legal or a policy framework for AI,” Altman responded to podcaster Theo Von’s question. Altman continued, “So, if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that, and I think that’s very screwed up.”
The CEO highlighted that, unlike conversations with a real-life therapist, lawyer, or doctor, which are protected by privilege, interactions with ChatGPT don’t enjoy the same legal safeguard. This means that OpenAI can be forced to disclose your chat records if required by law.
AI Chatbots Are Still Not Covered by Legal Protection
Altman said, “We should have, like, the same concept of privacy for your conversations with AI that we do with a therapist or whatever.”
He added that the lack of specific privacy protection for AI has only recently come into the spotlight, and that this issue needs to be addressed immediately.
Conversations with ChatGPT are not typically end-to-end encrypted, and OpenAI’s policy allows the company to view your chats for safety monitoring and for training its AI models.
Although users can delete their conversations, and OpenAI typically deletes them permanently after 30 days under its data retention policy, an ongoing lawsuit by The New York Times and other news publications is now forcing the company to retain all records indefinitely.
In the meantime, Altman suggested that users should have a clear understanding of the privacy policy if they plan to use AI extensively. Alternatively, there are workarounds for more secure and private use of AI, such as running models locally with tools like GPT4All by Nomic AI or Ollama.
Do you use ChatGPT a lot? What measures do you suggest to keep your chats safer? We want to hear your thoughts.