Think before you ask: Why ChatGPT legal queries can be used against you as court evidence

Time of India · 3 days ago
ChatGPT may be quick and convenient, but using it for legal questions could backfire in serious ways. Many users aren't aware that anything they type into the chatbot, even deleted messages, can be retained and used as evidence in legal proceedings.
Unlike lawyers, AI tools are not bound by confidentiality or ethical obligations. This means that sharing sensitive legal concerns with a chatbot doesn't just offer unreliable advice; it may also create a discoverable digital trail. Before you confide in AI, it is important to understand the risks and why human legal counsel is still essential.
Your ChatGPT conversations are not legally confidential
In a recent appearance on the This Past Weekend podcast hosted by comedian Theo Von, OpenAI CEO Sam Altman made a candid admission: conversations with ChatGPT are not protected under any kind of legal privilege.
"Right now, if you talk to a therapist or a lawyer or a doctor, there's legal privilege for it," Altman explained. "There's doctor-patient confidentiality, there's legal confidentiality. And we haven't figured that out yet for when you talk to ChatGPT."
This means if you type out a sensitive legal scenario, say, describing an incident that might amount to a crime or seeking strategic legal advice, that chat can potentially be disclosed in court.
According to Altman, OpenAI could be legally compelled to hand over your conversations, even if they've been deleted.
The consequences of this are serious. Legal experts like Jessee Bundy of Creative Counsel have warned users not to mistake AI for actual legal representation. "If you're pasting in contracts, asking legal questions, or asking [the chatbot] for strategy, you're not getting legal advice," Bundy posted on X (formerly Twitter). "You're generating discoverable evidence. No attorney-client privilege. No confidentiality. No ethical duty. No one to protect you." She added that ChatGPT may feel private and helpful, but unlike a licensed attorney, it has no legal obligation to act in your best interest, and it can't be held accountable for any incorrect advice it generates.
AI-generated legal advice isn't actually legal advice
When Malte Landwehr, CEO of an AI company, suggested that ChatGPT could still provide useful legal input even if it's not confidential, Bundy strongly pushed back.
"ChatGPT can't give you legal advice," she replied. "Legal advice comes from a licensed professional who understands your specific facts, goals, risks, and jurisdiction. And is accountable for it. ChatGPT is a language model. It generates words that sound right based on patterns, but it doesn't know your situation, and it's not responsible if it's wrong."
Calling it "legal Mad Libs," Bundy stressed that relying on ChatGPT for legal issues is both risky and potentially self-incriminating.
Deleted chats with AI aren't safe from legal scrutiny
User conversations with AI chatbots, including those that have been deleted, may still be stored and subject to disclosure in legal proceedings. As highlighted by ongoing litigation, some companies are required to retain chat records, which could be subpoenaed in court. This includes potentially sensitive or personal exchanges.
At present, there is no legal obligation for AI platforms to treat user chats as confidential in the same way communications with a lawyer or therapist are protected.
Until laws are updated to account for AI interactions, users should be aware that anything typed into a chatbot could, in some cases, be used as evidence.
Why it's best to speak with a human lawyer instead of ChatGPT
For legal concerns, whether it's about a contract, criminal matter, or a rights dispute, it's essential to consult a licensed professional. Unlike AI, lawyers are bound by strict confidentiality, legal privilege, and ethical duties. AI-generated responses may feel private and helpful, but they are not protected, verified, or accountable.
While it may be tempting to turn to AI for convenience, doing so for legal issues could expose you to unnecessary risk.
As artificial intelligence becomes more common in everyday use, it's important to recognise its limitations, especially in areas involving legal or personal stakes. Conversations with AI are not protected under legal privilege, and in the eyes of the law, they can be accessed like any other form of communication. Until privacy and legal frameworks are in place for AI, it's safest to avoid using chatbots for legal questions.
For advice you can trust and that will remain confidential, always consult a qualified legal professional.
Related Articles

Turning trash into energy
Time of India · 25 minutes ago
Standing beside mountains of metal scrap, Guru Punghavan believes India's energy security needs participation and research in battery technologies by companies of all sizes. He founded Boson Cell, which manufactures lithium-ion battery cells using a portion of recycled materials. His father has helmed an aluminium die-casting MSME serving the automobile and telecommunications sectors for 25 years. When Guru took up management, he opted for forward integration by making aluminium alloys. That's when he noticed the large volume of electronic waste being collected even by a metal recycling company. "This was a niche market which many people did not focus on. So we ventured into refining critical materials from electronic waste," he says.

Starting with battery recycling, Boson is focused on building lithium-ion cells for drones, EVs and energy storage systems (ESS). It grinds, separates and purifies critical minerals such as lithium, cobalt, nickel and graphite, and converts them into battery-grade chemicals. Having studied mechatronics and conducted projects in electrochemistry in college helped. "We started these research projects in 2019 to develop refining technologies. Initially, we sold cathode and anode materials; then we focused on battery cells, as each cell maker has unique needs, and there were limited opportunities," he said. The tech is well understood, but it is hard to execute and is also capital intensive, he adds. The startup has developed proprietary technology for refining and processing these materials, which make up around 35% of a battery cell's weight.
The trading of these materials drew intense scrutiny in the aftermath of recent geopolitical shifts, including China's export controls on battery and refining technologies that took effect in January. India currently imports more than 80% of its cells, predominantly from China. Boson Cell has built a facility at Sriperumbudur for recovering critical minerals through a process known as black mass generation, with an initial capacity of 0.8 gigawatt hours. "Currently, this serves up to 30% of our inputs, enabling us to focus on the primary business of building cells, but we plan to expand this capacity in the future," Guru Punghavan says. He also worked with South Korean companies to establish manufacturing lines, SOPs and clean rooms for cell manufacturing.

Boson Cell has developed two cells for drones with nickel manganese cobalt (NMC) chemistry and is in the process of building four more cells and exploring different cell chemistries, including lithium iron phosphate (LFP). "There are discussions around curbs on rare earth material exports from China. While it does not affect us now, it has highlighted the risks in the battery supply chain. Recycling of e-waste and refining capabilities for lithium, cobalt and nickel are at a nascent stage in India, and that affects our cell production," he says.

The company has raised $1.5 million from friends and family and plans to raise more capital to expand its production capacity, specifically targeting EV cells. The startup has partnered with drone OEMs and is in talks with electric two- and three-wheeler OEMs for pilot projects. It is also expecting financial support from the Centre under the FAME II programme. Guru Punghavan believes the government's targeted support for smaller battery cell manufacturers could boost the ecosystem. The startup aims to build a production line making one lakh cells a day by the next fiscal year, and hopes to source 100% of its cell materials from in-house recycling. It is betting on faster turnaround times and lower minimum order quantities to gain market share.

Godfather Of AI Warns Technology Could Invent Its Own Language: 'It Gets Scary...'
NDTV · 26 minutes ago
Geoffrey Hinton, regarded by many as the 'godfather of artificial intelligence' (AI), has warned that the technology could get out of hand if chatbots manage to develop their own language. Currently, AI does its thinking in English, allowing developers to track what the technology is thinking, but there could come a point where humans no longer understand what AI is planning to do, according to Mr Hinton. "Now it gets more scary if they develop their own internal languages for talking to each other," he said on an episode of the "One Decision" podcast that aired last month. "I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking." Mr Hinton added that AI has already demonstrated that it can think terrible thoughts, and it is not unthinkable that machines could eventually think in ways that humans cannot track or interpret.

Warning about AI
Mr Hinton laid the foundations for the machine learning that powers today's AI-based products and applications. However, the Nobel laureate grew wary of AI's future development and cut ties with his employer, Google, in order to speak more freely on the issue. "It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us," said Mr Hinton at the time. "I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control."

Mr Hinton has been a big advocate of government regulation of the technology, especially given the unprecedented pace of development. His warning also comes against the backdrop of repeated instances of AI chatbots hallucinating. In April, OpenAI's internal tests revealed that its o3 and o4-mini AI models were hallucinating, or making things up, much more frequently than even non-reasoning models such as GPT-4o. The company said it did not have any idea why this was happening. In a technical report, OpenAI said "more research is needed" to understand why hallucinations are getting worse as it scales up its reasoning models.

Anthropic CEO throws shade at Mark Zuckerberg's billion-dollar AI talent hunt with dartboard dig: 'You can't buy purpose with a paycheck'
Time of India · 40 minutes ago

Culture Over Cash
In the escalating turf war for top AI talent, Anthropic CEO Dario Amodei has delivered a pointed, and slightly humorous, critique of Meta's aggressive recruitment tactics. Speaking on the Big Technology Podcast, Amodei painted a vivid picture: "If Mark Zuckerberg throws a dart at a dartboard and it hits your name, that doesn't mean you should be paid ten times more than the guy next to you who's just as skilled."

His remarks come amid widespread reports of Meta launching an all-out offensive to poach AI engineers from rivals like OpenAI, Apple, Google, and Anthropic itself. Yet Amodei claims his startup has remained largely untouched. "Some [employees] wouldn't even talk to Meta," he said, asserting that their culture and mission are more attractive than any compensation package Meta can offer.

Meta has reportedly been dangling massive offers, with some packages surpassing $200 million for a single hire, according to Business Insider and WIRED. Amodei, however, says Anthropic refuses to match such sums, insisting on fair and consistent pay across the board. "I recently posted in our company Slack that we will not compromise our compensation principles or fairness if someone gets a big offer," he shared. In his view, rewarding one employee disproportionately just because they were on Meta's radar would be unjust to their equally capable colleagues. Despite this stance, Meta has managed to lure away at least one former Anthropic engineer, Joel Pobar, but Amodei suggests their broader impact has been limited.

The AI Hiring Wars: A Battle for Brains
Meta's latest AI moonshot, the Superintelligence Lab, has ignited a fierce scramble for elite minds. OpenAI's Chief Research Officer Mark Chen likened it to a break-in after losing several staffers overnight. Meanwhile, OpenAI CEO Sam Altman accused Meta of deploying 'giant offers' to lure talent, with some signing bonuses rumored to top $100 million. Zuckerberg is unapologetic about the ambition. In an internal memo seen by CNBC, he claimed, 'Developing superintelligence is coming into sight,' declaring his goal to bring personal AI to every individual, not just enterprises.

Buying Purpose? Not Quite, Says Amodei
While Meta may have the resources, Amodei questions whether mission-driven AI work can be bought. 'Zuckerberg is trying to buy something that can't be bought,' he said during the podcast, underscoring Anthropic's long-term focus on safe and ethical AI development. The sentiment resonates with other industry leaders too. OpenAI continues to frame itself as a purpose-first organization, while Meta's flashier, big-money moves risk creating tension even within its own teams. As CNBC reported, some insiders at Meta worry that a talent-heavy, cash-fueled approach could lead to ego clashes and fractured teams.

In the current AI landscape, where demand far outpaces supply, the value of a skilled AI researcher is rivaling that of a professional athlete. Yet, for companies like Anthropic and OpenAI, the real challenge isn't just retaining talent; it's maintaining a sense of purpose amid the frenzy.
