The Tea App Data Breach: What Was Exposed and What to Know About the Class Action Lawsuit

CNET · 6 days ago
Tea, a women's safety dating app that surged to the top of the free iOS App Store listings, suffered a major security breach last week. The company confirmed Friday that it "identified unauthorized access to one of our systems" that exposed thousands of user images. And now we know that DMs were accessed during the breach, too.
Tea's preliminary findings from the end of last week showed the data breach exposed approximately 72,000 images: 13,000 selfies and photo IDs that people had submitted during account verification, and 59,000 images that were publicly viewable in the app from posts, comments and direct messages.
Those images had been stored in a "legacy data system" that contained information from more than two years ago, the company said in a statement. "At this time, there is no evidence to suggest that current or additional user data was affected."
Earlier Friday, posts on Reddit and 404 Media reported that Tea app users' faces and IDs had been posted on anonymous online message board 4chan. Tea requires users to verify their identities with selfies or IDs, which is why driver's licenses and pictures of people's faces are in the leaked data.
And on Monday, a Tea spokesperson confirmed to CNET that the company "recently learned that some direct messages (DMs) were accessed as part of the initial incident." Tea has also taken the affected system offline. That confirmation followed a Monday report by 404 Media that an independent security researcher found it would have been possible for hackers to gain access to DMs between Tea users, affecting messages sent as recently as last week.
Tea said it has launched a full investigation to assess the scope and impact of the breach.
Class action lawsuit filed
One of the users of the Tea app, Griselda Reyes, has filed a class action lawsuit on behalf of herself and other Tea users affected by the data breach. According to court documents filed on July 28, as reported earlier by 404 Media, Reyes is suing Tea over its alleged "failure to properly secure and safeguard ... personally identifiable information."
"Shortly after the data breach was announced, internet users claimed to have mapped the locations of Tea's users based on metadata contained from the leaked images," the complaint alleges. "Thus, instead of empowering women, Tea has actually put them at risk of serious harm."
Tea also has yet to personally notify its users that their data was breached, the complaint alleges.
The complaint is seeking class action status, damages for those affected "in an amount to be determined" and certain requirements for Tea to improve its data storage and handling practices.
Tea and Cole & Van Note, the law firm representing Reyes, did not immediately respond to requests for comment on the class action lawsuit.
What is Tea?
The premise of Tea is to give women a space to report negative interactions they've had with men in the dating pool, with the intention of keeping other women safe.
The app is currently sitting at the No. 2 spot for free apps on Apple's US App Store, right behind ChatGPT, drawing international attention and sparking a debate about whether the app violates men's privacy. Following the news of the data breach, it has also fed into the wider ongoing debate about whether online identity and age verification pose an inherent security risk to internet users.
In the privacy section on its website, Tea says: "Tea Dating Advice takes reasonable security measures to protect your Personal Information to prevent loss, misuse, unauthorized access, disclosure, alteration and destruction. Please be aware, however, that despite our efforts, no security measures are impenetrable."

Related Articles

Tea's data breach shows why you should be wary of new apps — especially in the AI era
Business Insider · 2 hours ago

Mobile apps are easier to build than ever — but that doesn't mean they're safe.

Late last month, Tea, a buzzy app where women anonymously share reviews of men, suffered a data breach that exposed thousands of images and private messages. As cybersecurity expert Michael Coates put it, the impact of Tea's breach was that it exposed data "otherwise assumed to be private and sensitive" to anyone with the "technical acumen" to access that user data — and "ergo, the whole world."

Tea confirmed that about 72,000 images — including women's selfies and driver's licenses — had been accessed. Images from the app were then posted to 4chan, and within days, that information spread across the web on platforms like X. Someone made a map identifying users' locations, as well as a website where Tea users' verification selfies were ranked side by side.

It wasn't just images that were accessible. Kasra Rahjerdi, a security researcher, told Business Insider he was able to access more than 1.1 million private direct messages (DMs) between Tea's users. Rahjerdi said those messages included "intimate" conversations about topics such as divorce, abortion, cheating, and rape.

The Tea breach was a stark reminder that just because we assume our data is private doesn't mean it actually is — especially when it comes to new apps. "Talking to an app is talking to a really gossipy coworker," Rahjerdi said. "If you tell them anything, they're going to share it, at least with the owners of the app, if not their advertisers, if not accidentally with the world."

Isaac Evans, CEO of cybersecurity company Semgrep, said he uncovered an issue similar to the Tea breach when he was a student at MIT: a directory of students' names and IDs had been left open for the public to view. "It's just really easy, when you have a big bucket of data, to accidentally leave it out in the open," Evans said.

But despite the risks, many people are willing to share sensitive information with new apps. In fact, even after news of the Tea data breach broke, the app continued to sit near the top of Apple's App Store charts. On Monday, it was in the No. 4 slot, behind only ChatGPT, Threads, and Google. Tea declined to comment.

Cybersecurity in the AI era

The cybersecurity issues raised by the Tea app breach — namely, that emerging apps can often be less secure and that people are willing to hand over very sensitive information to them — could get even worse in the era of AI. Why? There are a few reasons.

First, people are getting more comfortable sharing sensitive information with apps, especially AI chatbots, whether that's ChatGPT, Meta AI, or specialized chatbots trying to replicate therapy. This has already led to mishaps. Take the "discover" feed in Meta's AI app: in June, Business Insider reported that people were publicly sharing — seemingly accidentally — some quite personal exchanges with Meta's AI chatbot.

Then there's the rise of vibe coding, which security experts say could lead to dangerous app vulnerabilities. Vibe coding, when people use generative AI to write and refine code, has been a favorite tech buzzword this year, and startups like Replit, Lovable, and Cursor have become highly valued vibe-coding darlings. But as vibe coding becomes more mainstream — and potentially leads to a geyser of new apps — cybersecurity experts have concerns.

Brandon Evans, a senior instructor at the SANS Institute and a cybersecurity consultant, told BI that vibe coding can "absolutely result in more insecure applications," especially as people build quickly and take shortcuts. (It's worth noting that while some of the public discourse on social media around Tea's breach includes criticism of vibe coding, some security experts said they doubted the platform itself used AI to generate its code.)

"One of the big risks about vibe coding and AI-generated software is, what if it doesn't do security?" Coates said. "That's what we're all pretty concerned about." Rahjerdi told BI that the advent of vibe coding is what prompted him to start investigating "more and more projects recently."

For Semgrep's Evans, vibe coding itself isn't the problem — it's how it interacts with developers' incentives more generally. Programmers often want to move fast, he said, speeding through the security review process. "Vibe-coding means that a junior programmer can suddenly be inside a racecar, rather than a minivan," he said.

But vibe-coded or not, consumers should "actively think about what you're sending to these organizations and really think about the worst case scenario," the SANS Institute's Evans said. "Consumers need to understand that there will be more breaches, not just because applications are being developed faster and arguably worse, but also because the adversaries have AI on their side as well," he added. "They can use AI to come up with new attacks to get this data too."
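
None of the reports above say exactly how Tea's data was exposed, but the failure class Isaac Evans describes (a big bucket of data accidentally left out in the open) is simple to illustrate. Below is a minimal sketch, in Python, of how a researcher might check whether an S3-style storage bucket answers unauthenticated listing requests; the bucket name is hypothetical, and the sketch is a generic illustration of this class of misconfiguration, not a description of Tea's systems.

    # Minimal sketch: probe whether an S3-style bucket answers unauthenticated
    # listing requests. A publicly listable bucket returns HTTP 200 with a
    # <ListBucketResult> XML body; a locked-down bucket returns 403 Access Denied.
    # The bucket name below is hypothetical.
    import requests

    def bucket_is_publicly_listable(bucket_name: str) -> bool:
        url = f"https://{bucket_name}.s3.amazonaws.com/"
        resp = requests.get(url, timeout=10)
        return resp.status_code == 200 and "<ListBucketResult" in resp.text

    if __name__ == "__main__":
        print(bucket_is_publicly_listable("example-legacy-user-uploads"))

An anonymous GET like this needs no credentials at all, which is what makes the misconfiguration so dangerous: anyone with, in Coates's words, the "technical acumen" to find the endpoint can read everything stored behind it.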

'Just about anybody using Google at this point will end up on Reddit.'
The Verge · 3 hours ago

Posted Aug 4, 2025 at 1:26 PM UTC by Emma Roth. 'Just about anybody using Google at this point will end up on Reddit.' That's according to Reddit CEO Steve Huffman, who highlighted the 'mix' of 70 million weekly users browsing Reddit directly or landing on the platform through Google searches, as reported earlier by SEO Roundtable. Huffman noted that 'external search will continue to be a big driver of new users' even as AI-generated answers pose a threat to the site's traffic.

Music lovers aren't happy with Android Auto's refreshed interface
Android Authority · 4 hours ago

TL;DR
  • Google recently refreshed the Android Auto interface with Material You, matching colors to your phone's wallpaper.
  • As part of the changes, the music player now uses wallpaper colors instead of album art and shows smaller album art.
  • Some users find the new design bland and unbalanced compared to the previous dynamic album art backgrounds.

Just last week, Google updated Android Auto's head unit interface with Material You, letting your car's display match the wallpaper on your Android phone. While the move is generally positive, especially for an interface that barely sees any wholesale UI refreshes, some users strongly dislike the change.

Reddit user Adil15101 highlighted that in Android Auto's updated interface, music players now adopt the color from your wallpaper instead of the album art. The update also slightly refreshes the music player layout, with the seekbar now positioned right next to the even smaller album art, leaving a lot more blank space in certain parts of the interface. (A composite photo in the original post shows the older interface at the top and the newer interface at the bottom.)

The refreshed interface for Android Auto music players is a visual downgrade. While matching the UI to the wallpaper brings consistency across the car head unit interface, users lose out on the more exciting backgrounds they have been used to with music players. Moving the seekbar makes sense, since you can't accidentally tap on it when reaching for a button now, but it also results in a very unbalanced UI with cramped spots between lots of empty space. Shrinking the album art also makes little sense when there is so much blank space. Reddit user flcinusa complains about it too, since their car has a portrait screen where the album art is comically tiny.

Have you received the refreshed interface on Android Auto? Do you like the changes? Is there something you don't like? Let us know in the comments.
