
Latest news with #userprivacy

ChatGPT conversations could be shared with court

Russia Today

3 days ago

The tech industry has yet to resolve how to protect user privacy in sensitive interactions with AI, OpenAI CEO Sam Altman has admitted. Current systems lack adequate safeguards for confidential conversations, he warned, amid a surge in the use of AI chatbots by millions of users – including children – for therapy and emotional support.

Speaking on the This Past Weekend podcast published last week, Altman said users should not expect legal confidentiality when using ChatGPT, citing the absence of a legal or policy framework governing AI. 'People talk about the most personal sh** in their lives to ChatGPT,' he said. Many AI users – particularly young people – treat the chatbot like a therapist or life coach for advice on relationship and emotional issues, Altman said. However, unlike conversations with lawyers or therapists, which are protected by legal privilege or confidentiality, no such protections currently exist for interactions with AI. 'We haven't figured that out yet for when you talk to ChatGPT,' he added.

Altman said the issue of confidentiality and privacy in AI interactions needs urgent attention. 'So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up,' he said.

OpenAI says it deletes free-tier ChatGPT conversations after 30 days, although some chats may be stored for legal or security reasons. The company is facing a lawsuit from The New York Times over the alleged use of Times articles to train its AI models. The case has compelled OpenAI to preserve conversations from millions of ChatGPT users, barring those of enterprise clients – an order the company has appealed, citing 'overreach.'

Recent research has linked ChatGPT use to psychosis in some users. According to researchers, concerns are growing that AI chatbots could exacerbate psychiatric conditions as they are increasingly used in personal and emotional contexts.

Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist

Yahoo

6 days ago

ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support. According to OpenAI CEO Sam Altman, the AI industry hasn't yet figured out how to protect user privacy when it comes to these more sensitive conversations, because there's no doctor-patient confidentiality when your doc is an AI.

The exec made these comments on a recent episode of Theo Von's podcast, This Past Weekend w/ Theo Von. In response to a question about how AI works with today's legal system, Altman said one of the problems of not yet having a legal or policy framework for AI is that there's no legal confidentiality for users' conversations.

'People talk about the most personal sh** in their lives to ChatGPT,' Altman said. 'People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.'

This could create a privacy concern for users in the case of a lawsuit, Altman added, because OpenAI would be legally required to produce those conversations today. 'I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago,' Altman said.

The company understands that the lack of privacy could be a blocker to broader user adoption. Beyond the vast amounts of online data AI demands during training, it's now being asked to produce data from users' chats in some legal contexts. OpenAI has already been fighting a court order in its lawsuit with The New York Times that would require it to save the chats of hundreds of millions of ChatGPT users globally, excluding those from ChatGPT Enterprise customers. In a statement on its website, OpenAI said it's appealing the order, which it called 'an overreach.' If the court can override OpenAI's own decisions around data privacy, the company could be opened up to further demands for legal discovery or law enforcement purposes.

Today's tech companies are regularly subpoenaed for user data to aid in criminal prosecutions. But in more recent years, there have been additional concerns about digital data as laws began limiting access to previously established freedoms, like a woman's right to choose. When the Supreme Court overturned Roe v. Wade, for example, customers began switching to more private period-tracking apps or to Apple Health, which encrypted their records.

Altman also asked the podcast host about his own ChatGPT usage; Von said he didn't talk to the AI chatbot much due to his own privacy concerns. 'I think it makes sense … to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity,' Altman said.

Meta shareholders vs Mark Zuckerberg in $8 billion lawsuit

ABC News

16-07-2025

Meta has been accused of harvesting user data without consent in a multi-billion-dollar lawsuit brought by company shareholders against chief executive Mark Zuckerberg. The case dates back to a 2018 scandal in which the data of millions of Facebook users was accessed by Cambridge Analytica, a now-defunct political consulting firm that worked for Donald Trump's 2016 presidential campaign.

Now Meta shareholders are suing Mr Zuckerberg and several current and former company executives, claiming they violated a 2012 agreement to protect user data. They want Mr Zuckerberg and his co-defendants to reimburse the company for more than $US8 billion ($12.2 billion) in fines and other costs Meta paid following the controversy. Mr Zuckerberg has dismissed the allegations in court filings as "extreme claims".

Jeannie Paterson, a University of Melbourne professor who specialises in consumer protection and AI regulation, said the lawsuit was "unusual". "This is an action by some minority shareholders against the company they hold shares on, and they're saying that the bad behaviour of the company … would have caused them loss, for which they should be compensated for by the directors," Professor Paterson said. "That is an astounding action and something quite new in this area."

How a scandal allegedly sold the personal data of 300,000 Australians

About a decade ago, a third-party app called This Is Your Digital Life saw the personal data of millions of Facebook users released to researchers. More than 300,000 Australians used the app, and ultimately the data of tens of millions of users was allegedly handed over. That data was allegedly passed to Cambridge Analytica, a British data analytics firm, and its parent company, Strategic Communication Laboratories, in violation of Facebook's terms of service. Cambridge Analytica used it to target Facebook users with political advertising during the 2016 US presidential election. The fallout has seen Facebook embroiled in court case after court case, including the one about to begin.

Who's involved in the latest lawsuit?

Sheryl Sandberg

Sheryl Sandberg served as chief operating officer at Meta from 2008 until she stepped down in August 2022. When Mr Zuckerberg recruited the then-Google executive, the pair wanted Facebook to become a "global leader". After stepping down, she remained a board member, noting she had only intended to stay for five years rather than the 14 her tenure ultimately ran. "I believe in this company," she said when announcing her decision to step down. "Have we gotten everything right? Absolutely not. Have we learned and listened and grown and invested where we need to? This team has and will." This year she announced she would not stand for re-election to the Meta board. She rose to prominence in 2013 after publishing Lean In, a corporate-feminist guide that became a best seller. In January she was sanctioned by a Delaware judge for deleting emails relating to the Cambridge Analytica privacy scandal.

Marc Andreessen

Marc Andreessen runs Andreessen Horowitz, an influential Silicon Valley venture capital firm that has previously invested in Instagram and Oculus VR. He was a seed investor in Facebook and has served on its board of directors since 2008. Late last year, he was credited as a "key networker" at Elon Musk's Department of Government Efficiency (DOGE), according to The Washington Post.

Peter Thiel

Peter Thiel is a venture capitalist, tech billionaire, and co-founder of PayPal and software company Palantir. He was the first big investor in Facebook, according to Forbes, but sold most of his stake and left the board in 2022 to focus on politics. Mr Thiel has been described as one of the largest donors to Republican candidates during the 2022 election campaign; by the beginning of that year, he had reportedly donated more than $US20.4 million ($31.2 million), according to The New York Times. Recently Mr Thiel rose to viral fame during a podcast interview discussing AI: asked whether the human race "should survive", he hesitated long enough that the host had to repeat the question. He ultimately said yes.

Reed Hastings

Reed Hastings is the co-founder and chairman of Netflix. Since stepping down as Netflix's co-chief executive in 2023, he has slowly been reducing his shares and now owns less than 1 per cent of the company, according to Forbes. He was on Facebook's board of directors from 2011 to 2019. According to The New York Times, he and fellow board member Peter Thiel butted heads over then-presidential-nominee Donald Trump in 2016; in emails between the pair, he reportedly labelled endorsing Mr Trump "catastrophically bad judgement".

Mark Zuckerberg

Mark Zuckerberg founded Facebook, now Meta, as a 19-year-old in 2004. The company went public in 2012, and Mr Zuckerberg now owns 13 per cent of its stock, according to Forbes. Between 2023 and 2024 his estimated net worth skyrocketed from $US64.4 billion ($98.3 billion) to $US177 billion ($270 billion), and it has continued to rise.

The latest lawsuit is set to get underway in the US state of Delaware on Wednesday, local time. Professor Paterson said the case was a creative way of addressing corporate governance. It was also being brought under corporations law, rather than what she called the "non-existent privacy law" in the US, and against the backdrop of the US Communications Decency Act, which has shielded platforms such as Facebook from liability for content posted by their users. "So it's a really interesting and innovative use of director's duties, and we've seen that a little bit in Australia," she said. "So this action is now taking on platform governance as a serious director duty. So you could say the next one could be AI governance." The non-jury trial is expected to last eight days.

Meta's ongoing legal dramas cost it billions

Over the last few years, Meta has settled several cases surrounding the Cambridge Analytica scandal. In 2022, Meta agreed to pay $US725 million to resolve a US class action lawsuit over the scandal. Late last year, the company agreed to a historic $50 million settlement with Australia's information commissioner over the user data scandal.

One court case that is still ongoing has been brought by one of Australia's richest people, Andrew Forrest, and relates to fraudulent Facebook cryptocurrency ads. In March, it was revealed there were about 230,000 fake ads purporting to show Mr Forrest spread across the company's social platforms. "The Andrew Forrest case against Meta is also quite a novel action," Professor Paterson said. "So, in the past the [Communications] Decency Act has kind of shielded, especially digital platforms, less so tech companies, from litigation. We're starting to see perhaps the cracks in that."

Could this case strengthen data protection?

The origins of Meta's most recent lawsuit stem from more than a decade ago. At the end of 2011, Facebook reached a deal with the US Federal Trade Commission over allegations it had a deceptive privacy policy; the agreement required Facebook to seek user permission before making privacy changes. David Vaile, a cyberspace legal expert at the University of New South Wales, said the agreement with the commission had been the "benchmark for weak" regulation of platforms such as Facebook. "Facebook is a rogue state in that they're the exemplar of the cult of disruption," Mr Vaile said, "and they use their motto: forgiveness, not permission."

Meta has been more aggressive than other tech companies in accessing data for AI, Mr Vaile said. In January, court documents revealed the company had used Library Genesis (LibGen), an online trove of pirated books and academic papers, to train its generative AI language model. "They're being sued in a number of different jurisdictions for grabbing material and absorbing and regurgitating material through these generative tools that they had no right to, they had no permission for," he said.

That is why Mr Vaile believes this case presents an opportunity to strengthen protections against data harvesting as tech companies continue to develop AI. "Having this litigation succeed would be a very useful disciplinary corrective. If this litigation fails, it'll be basically all bets are off on whatever they feel like doing with the AI stuff," he said.

ABC/Reuters

ICEBlock isn't ‘completely anonymous'

The Verge

15-07-2025

The developer of ICEBlock, an iOS app for anonymously reporting sightings of US Immigration and Customs Enforcement (ICE) officials, promises that it 'ensures user privacy by storing no personal data.' But that claim has come under scrutiny. ICEBlock creator Joshua Aaron has been accused of making false promises about user anonymity and privacy, of being 'misguided' about the privacy offered by iOS, and of being an Apple fanboy. The issue isn't what ICEBlock stores; it's what the app could accidentally reveal through its tight integration with iOS.

Aaron released ICEBlock in early April, and it rocketed to the top of the App Store earlier this month after US Homeland Security Secretary Kristi Noem called it an 'obstruction of justice.' When calls for an Android version followed, however, the developer said it wasn't possible. 'Our application is designed to provide as much anonymity as possible without storing any user data or creating accounts,' reads part of the lengthy statement. 'Achieving this level of anonymity on Android is not feasible due to the inherent requirements of push notification services.'

The statement rankled some. The developers of GrapheneOS, an open-source, privacy-focused take on Android, took to Bluesky to accuse ICEBlock of 'spreading misinformation about Android' by describing it as less private than iOS. The developers said that ICEBlock ignores data kept by Apple itself and claims it 'provides complete anonymity when it doesn't.'

Aaron told The Verge ICEBlock is built around a single database in iCloud. When a user taps on the map to report an ICE sighting, the location data is added to that database, and users within five miles are automatically sent a push notification alerting them. Push notifications require developers to have some way of designating which devices receive them, and while Aaron declined to say precisely how the notifications function, he said alerts are sent through Apple's system, not ICEBlock's, letting him avoid keeping his own database of users or their devices. 'We utilized iCloud in kind of a creative way,' Aaron said.
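Aaron hasn't published the design, but the pattern he describes lines up with what Apple's CloudKit framework offers: an app can write records to a shared public database and register location-scoped query subscriptions that Apple's servers, not the developer's, fan out as push notifications. The Swift sketch below is a minimal, hypothetical illustration of that pattern, not ICEBlock's actual code; the record type, field name, subscription ID, and radius are all invented for the example.

```swift
import CloudKit
import CoreLocation

// Hypothetical sketch of the pattern Aaron describes: sighting reports
// live in CloudKit's shared public database, and each device registers a
// location-scoped subscription that Apple's servers use to fan out
// pushes. Record type, field name, and subscription ID are invented.
let reportsDB = CKContainer.default().publicCloudDatabase

// Reporting: write a record containing only the sighting's location.
func submitReport(at location: CLLocation) {
    let record = CKRecord(recordType: "Sighting")
    record["location"] = location
    reportsDB.save(record) { _, error in
        if let error { print("Report failed: \(error)") }
    }
}

// Receiving: ask Apple to notify this device about new records created
// nearby. The subscription lives on Apple's servers, so the developer
// never handles device tokens or user identifiers.
func subscribeToNearbySightings(around here: CLLocation) {
    let radius: CLLocationDistance = 8_047 // metres; roughly five miles
    let predicate = NSPredicate(
        format: "distanceToLocation:fromLocation:(location, %@) < %f",
        here, radius
    )
    let subscription = CKQuerySubscription(
        recordType: "Sighting",
        predicate: predicate,
        subscriptionID: "nearby-sightings",
        options: .firesOnRecordCreation
    )
    let info = CKSubscription.NotificationInfo()
    info.alertBody = "ICE sighting reported near you"
    subscription.notificationInfo = info
    reportsDB.save(subscription) { _, error in
        if let error { print("Subscription failed: \(error)") }
    }
}
```

If ICEBlock works anything like this, the mapping from subscriptions to devices lives entirely on Apple's side, which is both the appeal and, as critics point out below, the catch.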
No security model is 100 percent safe, but in theory, ICEBlock has managed to limit the risks for people both reporting and receiving information. The Department of Homeland Security could demand information on who submitted a tip, but per Aaron's explanation, the app wouldn't have user accounts, device IDs, or IP addresses to hand over. Likewise, if ICE thinks someone used the app to find an operation and interfere, it could seek records from ICEBlock tied to who received a particular push notification, and again, it should come away empty-handed.

That trick is iOS-only, though. The ICEBlock iOS app can piggyback on Apple's iCloud infrastructure to route push notifications because every iPhone user is guaranteed to have an iCloud account. Android users aren't similarly required to create Google accounts, so 'some kind of database has to be created in order to capture user information,' Aaron said. (Sharing reports across both phone platforms would create its own privacy challenges, too.)

I spoke to Gaël Duval, founder and CEO of /e/OS, another privacy-focused version of Android, and he acknowledged that Android's push notifications require 'a registration token that uniquely identifies a given app on a given device' and that this 'would normally be saved on ICEBlock's server.' 'It's a long and random string,' he said, that doesn't include either an Android ID or the IMEI that identifies a specific phone. 'Google can still map it back to the hardware on their side, but for ICEBlock, it's pseudonymous until you link it to anything else.'

So, indeed, Android notifications would require ICEBlock to store potentially identifiable information. Normally, iOS would, too, but a clever workaround lets ICEBlock avoid just that.
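For contrast, the conventional iOS push pipeline, the one a CloudKit-style workaround sidesteps, hands the developer a per-device APNs token that some backend has to store. A rough, hypothetical sketch (the registration endpoint is invented):

```swift
import UIKit

// In a standard push setup, APNs gives the app a token that uniquely
// identifies this install on this device, and the app uploads it to the
// developer's own server: exactly the per-device record ICEBlock says
// it never keeps.
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
        let token = deviceToken.map { String(format: "%02x", $0) }.joined()
        var request = URLRequest(url: URL(string: "https://example.com/register")!)
        request.httpMethod = "POST"
        request.httpBody = token.data(using: .utf8)
        // After this call, the token sits in a server-side database.
        URLSession.shared.dataTask(with: request).resume()
    }
}
```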
But you might have spotted the problem: ICEBlock isn't collecting device data on iOS only because similar data is stored with Apple instead. Apple maintains a database of which devices and accounts have installed a given app, and Carlos Anso from GrapheneOS told me that it likely also tracks device registrations for push notifications. For either ICEBlock's iOS app or a hypothetical Android app, law enforcement could demand information directly from the platform owner, cutting ICEBlock out of the loop. Aaron told me that he has 'no idea what Apple would store,' and that it 'has nothing to do with ICEBlock.'

For people who submit reports, Duval suggested there might also be 'a residual risk' from matching report timings against telemetry data, and Anso echoed a similar worry. But without the precise details of ICEBlock's design, which Aaron is understandably reluctant to share, that's impossible to verify. 'Absolutely not,' Aaron said when I asked whether it's a concern. He insisted that 'there is no risk' of Apple having data on which users have submitted reports.

Aaron said ICEBlock stores essentially no data on its users on iOS right now, and that he couldn't achieve the same setup on Android, a web app, or an open-source design. Critics argue he's offering a false sense of security by offloading the risk to Apple. And while it's not clear exactly what data Apple has on ICEBlock's users, it's enough to cast doubt on the claim that 'there is no data.'

The question, then, is how safe that data is with Apple. Aaron insisted that 'nothing that Apple has would harm the user,' and he was confident that Apple wouldn't share it anyway. 'Apple has a history, that when the government tries to come after them for things, they haven't divulged that information, they've gone to court over it,' he said. 'They've fought those battles and won.'

That isn't entirely true. While Apple has engaged in some high-profile privacy fights with governments and law enforcement, including efforts to get into the San Bernardino shooter's iPhone and its recent refusal to build a backdoor into iCloud encryption in the UK, it complies with the majority of government requests it receives. In its most recent transparency report, covering the first half of 2024, Apple said it agreed to 86 percent of US government requests for device-based data access, 90 percent for account-based access, and 28 percent for push notification logs. Many of these are benign (they include help tracking lost or stolen phones, for example), but others relate to cases where an 'Apple account may have been used unlawfully.' Demanding push notification data from both Apple and Google has become a key way for law enforcement to identify suspected criminals.

People have a constitutional right to record public police operations and to share tips about sightings. As Aaron said, an app like ICEBlock, contrary to Noem's claims, 'is in no way illegal' under current American law. But at a time when neither the president nor the Supreme Court has much regard for constitutional rights, the question isn't whether ICEBlock is legal; it's whether any information that runs through it could expose people who resist ICE, legally or not.

'We don't want anything,' Aaron said. 'I don't want a private database. I don't want any kind of information on my side at all.' And there's the rub: ICEBlock says your data is safe because it doesn't have any, but that doesn't mean the data isn't out there. Do you have as much faith in Apple as Aaron does?
