
The Tea App Data Breach: What Was Exposed and What We Know About the Class Action Lawsuit
Tea's preliminary findings from the end of last week showed the data breach exposed approximately 72,000 images: 13,000 images of selfies and photo identification that people had submitted during account verification, and 59,000 images that were publicly viewable in the app from posts, comments and direct messages.
Those images had been stored in a "legacy data system" that contained information from more than two years ago, the company said in a statement. "At this time, there is no evidence to suggest that current or additional user data was affected."
Earlier Friday, posts on Reddit and 404 Media reported that Tea app users' faces and IDs had been posted on the anonymous online message board 4chan. Tea requires users to verify their identities with selfies or IDs, which is why driver's licenses and pictures of people's faces appear in the leaked data.
And on Monday, a Tea spokesperson confirmed to CNET that the company additionally "recently learned that some direct messages (DMs) were accessed as part of the initial incident." Tea has also taken the affected system offline. The confirmation followed a Monday report by 404 Media that an independent security researcher found it would have been possible for hackers to gain access to DMs between Tea users, affecting messages sent as recently as last week.
Tea said it has launched a full investigation to assess the scope and impact of the breach.
Class action lawsuit filed
Tea app user Griselda Reyes has filed a class action lawsuit on behalf of herself and other Tea users affected by the data breach. According to court documents filed on July 28, as reported earlier by 404 Media, Reyes is suing Tea over its alleged "failure to properly secure and safeguard ... personally identifiable information."
"Shortly after the data breach was announced, internet users claimed to have mapped the locations of Tea's users based on metadata contained from the leaked images," the complaint alleges. "Thus, instead of empowering women, Tea has actually put them at risk of serious harm."
Tea has yet to individually notify its customers that their data was breached, the complaint also alleges.
The complaint is seeking class action status, damages for those affected "in an amount to be determined" and certain requirements for Tea to improve its data storage and handling practices.
Scott Edward Cole of Cole & Van Note, the law firm representing Reyes, told CNET he is "stunned" by the alleged lack of security protections in place.
"This application was advertised as a safe place for women to share information, sometimes very intimate information, about their dating experiences. Few people would take that risk if they'd known Tea Dating put such little effort into its cybersecurity," Cole alleged. "One chief goal of our lawsuit is to compel the company to start taking user privacy a lot more seriously."
Tea did not immediately respond to a request for comment on the class action lawsuit.
What is the Tea app?
The premise of Tea is to give women a space to report negative interactions they've had with men in the dating pool, with the intention of keeping other women safe.
The app currently sits at the No. 2 spot for free apps on Apple's US App Store, right behind ChatGPT, drawing international attention and sparking debate over whether the app violates men's privacy. Following the news of the data breach, the app also figures into the wider ongoing debate about whether online identity and age verification pose an inherent security risk to internet users.
In the privacy section on its website, Tea says: "Tea Dating Advice takes reasonable security measures to protect your Personal Information to prevent loss, misuse, unauthorized access, disclosure, alteration and destruction. Please be aware, however, that despite our efforts, no security measures are impenetrable."
