Tea's data breach shows why you should be wary of new apps — especially in the AI era
Late last month, Tea, a buzzy app where women anonymously share reviews of men, suffered a data breach that exposed thousands of images and private messages.
As cybersecurity expert Michael Coates put it, the impact of Tea's breach was that it exposed data "otherwise assumed to be private and sensitive" to anyone with the "technical acumen" to access that user data — and "ergo, the whole world."
Tea confirmed that about 72,000 images — including women's selfies and driver's licenses — had been accessed. Images from the app were then posted to 4chan, and within days, that information spread across the web on platforms like X. Someone made a map identifying users' locations, as well as a website where Tea users' verification selfies were ranked side by side.
It wasn't just images that were accessible. Kasra Rahjerdi, a security researcher, told Business Insider he was able to access more than 1.1 million private direct messages (DMs) between Tea's users. Rahjerdi said those messages included "intimate" conversations about topics such as divorce, abortion, cheating, and rape.
The Tea breach was a stark reminder that just because we assume our data is private doesn't mean it actually is — especially when it comes to new apps.
"Talking to an app is talking to a really gossipy coworker," Rahjerdi said. "If you tell them anything, they're going to share it, at least with the owners of the app, if not their advertisers, if not accidentally with the world."
Isaac Evans, CEO of cybersecurity company Semgrep, said he uncovered an issue similar to the Tea breach when he was a student at MIT. A directory of students' names and IDs was left open for the public to view.
"It's just really easy, when you have a big bucket of data, to accidentally leave it out in the open," Evans said.
But despite the risks, many people are willing to share sensitive information with new apps. In fact, even after news of the Tea data breach broke, the app continued to sit near the top of Apple's App Store charts. On Monday, it was in the No. 4 slot on the chart behind only ChatGPT, Threads, and Google.
Tea declined to comment.
Cybersecurity in the AI era
The cybersecurity issues raised by the Tea app breach — namely that emerging apps can often be less secure and that people are willing to hand over very sensitive information to them — could get even worse in the era of AI.
Why? There are a few reasons.
First, there's the fact that people are getting more comfortable sharing sensitive information with apps, especially AI chatbots, whether that's ChatGPT, Meta AI, or specialized chatbots trying to replicate therapy. This has already led to mishaps. Take Meta's AI app's "discover" feed, for example. In June, Business Insider reported that people were publicly sharing — seemingly accidentally — some quite personal exchanges with Meta's AI chatbot.
Then there's the rise of vibe coding, which security experts say could lead to dangerous app vulnerabilities.
Vibe coding, in which people use generative AI to write and refine code, has been a favorite tech buzzword this year. Meanwhile, tech startups like Replit, Lovable, and Cursor have become highly valued vibe-coding darlings.
But as vibe coding becomes more mainstream — and potentially leads to a geyser of new apps — cybersecurity experts have concerns.
Brandon Evans, a senior instructor at the SANS Institute and cybersecurity consultant, told BI that vibe coding can "absolutely result in more insecure applications," especially as people build quickly and take shortcuts.
(It's worth noting that while some of the public discourse around Tea's breach has blamed vibe coding, some security experts said they doubted the platform itself used AI to generate its code.)
"One of the big risks about vibe coding and AI-generated software is what if it doesn't do security?" Coates said. "That's what we're all pretty concerned about."
Rahjerdi told BI that the advent of vibe coding is what prompted him to start investigating "more and more projects recently."
For Semgrep's Evans, vibe coding itself isn't the problem — it's how it interacts with developers' incentives more generally. Programmers often want to move fast, he said, speeding through the security review process.
"Vibe-coding means that a junior programmer can suddenly be inside a racecar, rather than a minivan," he said.
But vibe-coded or not, consumers should "actively think about what you're sending to these organizations and really think about the worst case scenario," the SANS Institute's Evans said.
"Consumers need to understand that there will be more breaches, not just because applications are being developed faster and arguably worse, but also because the adversaries have AI on their side as well," he added. "They can use AI to come up with new attacks to get this data too."
