"ChatGPT is not a diary, therapist, lawyer, or friend": LinkedIn user warns against oversharing everything with AI
ChatGPT users are being warned to think twice before typing anything personal into the chatbot. OpenAI CEO Sam Altman recently confirmed that interactions with ChatGPT aren't protected by confidentiality laws. Conversations you assume are private may be stored, reviewed, and even presented in court, no matter how sensitive, emotional or casual they seem.

'If you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up,' Altman said in an interview on the This Past Weekend podcast. He added, 'We should have, like, the same concept of privacy for your conversations with AI that we do with a therapist or whatever.'

But as of now, that legal framework doesn't exist. Altman explained, 'Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's confidentiality. We haven't figured that out yet for ChatGPT.'

This sharp warning is echoed by Shreya Jaiswal, a Chartered Accountant and founder of Fawkes Solutions, who posted her concerns on LinkedIn. Her message was blunt and alarming.
'ChatGPT can land you in jail. No, seriously. Not even joking,' she wrote. According to Jaiswal, Altman's own words spell out the legal dangers. 'Sam Altman – the CEO of OpenAI, literally said that anything you type into ChatGPT can be used as evidence in court. Not just now, even months or years later, if needed. There's no privacy, no protection, nothing, unlike talking to a real lawyer or therapist who is sworn to client confidentiality.'

She laid out a few scenarios that, while hypothetical, are disturbingly plausible. Imagine someone types: 'I cheated on my partner and I feel guilty, is it me or the stars that are misaligned?' Jaiswal pointed out how this could resurface in a family court battle. 'Boom. You're in court 2 years later fighting an alimony or custody battle. That chat shows up. And your 'private guilt trip' just became public proof.'
Even seemingly harmless curiosity can be risky. 'How do I save taxes using all the loopholes in the Income Tax Act?' or 'How can I use bank loans to become rich like Vijay Mallya?' could be interpreted as intent during a future audit or legal probe. 'During a tax audit or loan default, this could easily be used as evidence of intent even if you never actually did anything wrong,' she warned.

In another example, she highlighted workplace risk. 'I'm thinking of quitting and starting my own company. How can I use my current company to learn for my startup?' This, she argued, could be used against you in a lawsuit for breach of contract or intellectual property theft. 'You don't even need to have done anything. The fact that you thought about it is enough.'

Jaiswal expressed concern that people have become too casual, even intimate, with AI tools. 'We've all gotten way too comfortable with AI. People are treating ChatGPT like a diary. Like a best friend. Like a therapist. Like a co-founder. But it's none of those. It's not on your side, it's not protecting you. And legally, it doesn't owe you anything.'

She closed her post with a simple piece of advice: 'Let me make this simple – if you wouldn't say it in front of a judge, don't type it into ChatGPT.' And her final thought was one that many might relate to: 'I'm honestly scared. Not because I have used ChatGPT for something I shouldn't have. But because we've moved too fast, and asked too few questions, and continue to do so in the world of AI.'

These concerns aren't just theory. In a 2024 bankruptcy case in the United States, a lawyer submitted a legal brief that cited fake court cases generated by ChatGPT. The judge imposed a fine of $5,500 and ordered the lawyer to attend an AI ethics session.
Similar disciplinary actions were taken against lawyers in Utah and Alabama who relied on fabricated AI-generated citations. These incidents have underscored a critical truth: AI cannot replace verified legal research or professional advice. It can mislead, misrepresent, or completely fabricate information, a failure mode researchers call "AI hallucinations".

Altman also flagged a worrying trend among younger users. Speaking at a Federal Reserve conference, he said, 'There are young people who say, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. I'm going to do whatever it says.' That feels really bad to me.'

He's concerned that blind faith in AI could be eroding people's ability to think critically. While ChatGPT is programmed to provide helpful answers, Altman stressed it lacks context, responsibility, and real emotional understanding. His advice is straightforward, and it applies to everyone:

Don't use ChatGPT to confess anything sensitive, illegal or personal
Never treat it as a lawyer, therapist, or financial advisor
Verify any factual claims independently
Use AI to brainstorm, not to confess
And most importantly, don't say anything to a chatbot that you wouldn't be comfortable seeing in court
While OpenAI claims that user chats are reviewed for safety and model training, Altman admitted that conversations may be retained if required by law. Even if you delete a conversation, legal demands can override those actions. With ongoing lawsuits, including one from The New York Times, OpenAI may soon have to store conversations indefinitely.

For those looking for more privacy, Altman suggested considering open-source models that can run offline, like GPT4All by Nomic AI or Ollama. But he stressed that what's needed most is a clear legal framework. 'I think we will certainly need a legal or a policy framework for AI,' he said.

Until then, treat your chats with caution. Because what you type could follow you, even years later.
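For readers curious what the offline route Altman mentioned looks like in practice, the sketch below shows one way to query a locally running model through Ollama's HTTP API. It is a minimal illustration under stated assumptions, not an endorsement of any tool: it assumes Ollama is installed and listening on its default port (11434) and that a model has already been pulled; the model name 'llama3' and the sample prompt are placeholders for whatever you choose to run.

```python
# Minimal sketch: querying a locally running Ollama model over its
# default HTTP API (http://localhost:11434). Assumes Ollama is installed
# and a model has been pulled beforehand, e.g. with `ollama pull llama3`.
# The model name and prompt are illustrative placeholders.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request one complete JSON reply, not a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body.get("response", "")

if __name__ == "__main__":
    # The exchange never leaves your machine: no cloud account and
    # no provider-side chat history for a legal demand to reach.
    print(ask_local_model("Explain attorney-client privilege in two sentences."))
```

The design point is locality rather than any particular library: prompts and replies stay on your own disk, so retention is governed by you, not by a provider's legal obligations.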