The Tea App Data Breach: What Happened, and What Was Exposed

CNET · 6 days ago
Tea, a women's safety dating app that surged to the top of the free iOS App Store charts, suffered a major security breach last week. The company confirmed Friday that it "identified unauthorized access to one of our systems," which exposed thousands of user images. And now we know that DMs were accessed during the breach, too.
Tea's preliminary findings from the end of last week showed the data breach exposed approximately 72,000 images: 13,000 images of selfies and photo identification that people had submitted during account verification, and 59,000 images that were publicly viewable in the app from posts, comments and direct messages.
Those images had been stored in a "legacy data system" that contained information from more than two years ago, the company said in a statement. "At this time, there is no evidence to suggest that current or additional user data was affected."
Earlier Friday, Reddit posts and a 404 Media report indicated that Tea users' faces and IDs had been posted to the anonymous online message board 4chan. Tea requires users to verify their identities with selfies or IDs, which is why driver's licenses and pictures of people's faces are in the leaked data.
And on Monday, a Tea spokesperson confirmed to CNET that the company "recently learned that some direct messages (DMs) were accessed as part of the initial incident." Tea has since taken the affected system offline. That confirmation followed a Monday report from 404 Media that an independent security researcher had discovered it would have been possible for hackers to gain access to DMs between Tea users, affecting messages sent as recently as last week.
Tea said it has launched a full investigation to assess the scope and impact of the breach.
What is Tea?
The premise of Tea is to provide women with a space to report negative interactions they've had while encountering men in the dating pool, purportedly to keep other women safe.
The app is currently sitting at the No. 2 spot for free apps on Apple's US App Store, right after ChatGPT, drawing international attention and sparking debate about whether the app violates men's privacy. Following news of the data breach, it also feeds into the wider ongoing debate over whether online identity and age verification pose an inherent security risk to internet users.
In the privacy section on its website, Tea says: "Tea Dating Advice takes reasonable security measures to protect your Personal Information to prevent loss, misuse, unauthorized access, disclosure, alteration and destruction. Please be aware, however, that despite our efforts, no security measures are impenetrable."

Related Articles

Verizon confirms bad news for customers via email — and there's already a backlash

Tom's Guide · 37 minutes ago

Verizon customers aren't having the best week. First, we heard the carrier was planning to increase fees; now it appears Verizon is also removing discounts from some plans, leaving customers forced to spend extra money. That's likely to upset some people and could push them to switch to one of the other top phone carriers, such as T-Mobile, AT&T or one of the popular MVNOs, which can be significantly more affordable.

The gigantic mobile carrier is pulling loyalty discounts from customer accounts on September 1, at least according to emails posted on Reddit. "We are writing to let you know that a discount on your account will soon end. This discount will be removed no sooner than September 1, 2025. It is important to us to be upfront and transparent when changes are made to your account," reads the alleged email.

Of course, Verizon reminds impacted customers that it continues "to invest in offering the very best experience and most value for your money in the industry," but that may not be good enough for users frustrated by the change. One Reddit user said: "They're ending loyalty discounts, which was the only thing keeping me with Verizon. Guess I'm gonna port out."

It appears Verizon wants customers on older plans to switch, and this could be another attempt to persuade them. In the email, the company even says, "We want to ensure you get the best value and experience from Verizon and encourage you to check out our myPlan options for the plan that works best for you."

It's a bold move from Verizon, as it could lead customers to port their numbers out instead of switching to myPlan. Perhaps people on the older plans with loyalty discounts aren't as profitable, and Verizon figures it's worth the risk of losing them by moving them to more lucrative plans. But that's speculation.
In the meantime, read why we recently updated our top pick for best phone carrier from Verizon to T-Mobile. Follow Tom's Guide on Google News to get our up-to-date news, how-tos, and reviews in your feeds. Make sure to click the Follow button.

OpenAI says ChatGPT is on track to reach 700M weekly users

Yahoo · an hour ago

ChatGPT's impressive growth as a consumer app continues: the chatbot is on track to hit 700 million weekly active users this week, the company says. The app had earlier reached 500 million weekly active users as of the end of March, noted Nick Turley, OpenAI VP and head of the ChatGPT app, in a post on X. He also said the app has grown 4x since last year. "Every day, people and teams are learning, creating, and solving harder problems. Big week ahead. Grateful to the team for making ChatGPT more useful and delivering on our mission so everyone can benefit from AI," he posted.

The app's popularity increased after OpenAI launched an upgraded image-generation feature, powered by the GPT-4o model, in March. In early April, the company's COO, Brad Lightcap, said that more than 130 million users had created over 700 million images in just a few days after the launch.

The company has also seen an increase in subscribers. Last week, Lightcap said that ChatGPT had 5 million paying business users, up from 3 million in June. In a recent report, market intelligence firm Sensor Tower noted that users open ChatGPT on more than 12 days a month on average, behind only Google and X. The report also said that in H1 2025, users spent an average of 16 minutes per day on the app.

Teens Are Flocking to AI Chatbots. Is this Healthy?

Scientific American · an hour ago

Relationships are messy, whether you are an adult with lots of experience or a kid navigating tough times with a best friend, boyfriend or girlfriend. You can't predict moods, interests or desires. For teens learning the ins and outs of relationships for the first time, disagreements, fights and breakups can be crushing.

But what if your teen's best friend wasn't actually human? It may seem far-fetched, but it's not. A new report from Common Sense Media says that 72 percent of teens surveyed have used AI companions, and 33 percent have relationships or friendships with these chatbots. The language that AI companions use, the responses they give, and the empathy they exude can make users feel truly understood and sympathized with. These chatbots can make someone feel liked or even loved; they are programmed to help users feel like they've made a real connection. And because adolescents have a naturally developing fascination with romance and sexuality, if you feel ignored by the girls in your high school, well, now, on the nearest screen is a hot girlfriend who is constantly fascinated by you and your video games, or a super cute boyfriend with whom you never had to make small talk to form a bond.

This may be perplexing to some parents, but if your child is navigating the complex worlds of technology, social media and artificial intelligence, the likelihood they will be curious about an AI companion is pretty high. Here's what you need to know to help them.

Chatbots have been around for a long time. In 1966 an MIT professor named Joseph Weizenbaum created the first chatbot, named ELIZA. Today AI and natural language processing have sprinted far past ELIZA.
You probably have heard of ChatGPT, but some of the common companion AI platforms are ones you might not be familiar with: Replika and My AI are just a few. In 2024 Mozilla counted more than 100 million downloads of a group of chatbot apps. Some apps set 18 as a minimum age requirement, but it's easy for a younger teen to get around that.

You might think your kid won't get attached, that they will know this chatbot is an algorithm designed to generate responses based on the text inputs it receives; that it's not "real." But a fascinating Stanford University study of students who use the app Replika found that 81 percent considered their AI companion to have "intelligence," and 90 percent thought it "human-like."

On the plus side, these companions are sometimes touted for their supportiveness and promotion of mental health; the Stanford study even found that 3 percent of users felt their Replika had directly helped them avoid suicide. If you're a teenager who is marginalized, isolated or struggling to make friends, an AI companion can provide much-needed companionship. It may offer practice in building conversational and social skills, and chatbots can offer helpful information and tips.

But are they safe? A Florida mother has sued the company behind one such chatbot, alleging it formed an obsessive relationship with her 14-year-old son, Sewell Setzer III, and ultimately encouraged him to take his own life. Another suit filed in 2024 alleges that the same chatbot encourages self-harm in teens and violence toward parents who try to set limits on how often kids use the app. Then there's privacy: Wired, drawing on Mozilla's research, labeled AI companions a "privacy nightmare," many crawling with data trackers that might manipulate users into thinking a chatbot is their soulmate, encouraging negative or harmful behaviors.
Given what we know about teens, screens and mental health, online influences are sometimes powerful, largely unavoidable, and potentially life-changing for children and families. So what do you do?

Remind kids that human friends offer so much that AI companions don't. IRL friendships are challenging, and this is a good thing. Remind them that in their younger years, play is how they learned new skills; if they didn't know how to put LEGO bricks together, they learned with a new friend. If they struggled with collaboration and cooperation, play taught them how to take turns and how to adjust based on their playmates' responses. Friends give children practice with the ins and outs of relationships. A friend can be tired, crabby or overexcited. They might be lots of fun but easily frustrated, or sometimes boring but very loyal. Growing up, a child has to learn to take a friend's personality and quirks into account, and how to keep the friendship going. Maybe most poignantly, they learn how incredibly valuable friends are when things get tough. In cases of social stress, like bullying, the support of a friend who sticks by you is priceless. In my study of more than 1,000 teenagers in 2020, keeping close to a friend was by far the most helpful strategy for kids who said they were the targets of bullies. Another study of more than 1,000 teens found that IRL friends can lessen the effects of problematic social media use.

If they are curious about AI companions, educate them. This can increase their skepticism and awareness about these programs and why they exist (and why they're often free). It's important to acknowledge the pluses as well as the minuses of digital companionship. AI companions can be very supportive; they're never fuming on the school bus because their mother made them wear a sweater on a cold morning, they're never jealous when you have a new girlfriend, and they never accuse you of ignoring their needs.
But an AI companion won't teach your teen how to handle it when a friend drops them for a new best friend, or develops an interest they just can't share. Discussing profit motives, personal security risks and social or emotional risks doesn't guarantee that a teenager won't go online and get an AI girlfriend, but it will at least plant the seeds of healthy doubt.

It may be important to identify high-risk kids who already struggle with social skills or making friends and who may be particularly vulnerable to toxic AI companions. In a world populated by children with generally depleted social skills, eliminating the complex, sometimes awkward human factor can feel like a great advantage, at least in the short term. In a preliminary analysis of 1,983 teens in three states, I found that of the kids who made romantic connections online, 50 percent said they liked that approach because it eliminated the need for meeting, talking and all the other awkward "stuff" you have to do in person with someone.

That said, most teens don't report having any serious problems or outcomes from their online activities. In a preliminary analysis of a 2022 study that I recently presented at a conference, only 3 percent of 642 older teens from Colorado, Massachusetts and Virginia reported that they had ever had a significant (i.e., non-minor) online problem. We hear about online problems so frequently that we tend to assume they're common, but that doesn't appear to be the case. I don't think it's inevitable that human friendships will be uniformly abandoned for AI ones, resulting in catastrophic loneliness and loss of online privacy.

Finally, keep the conversations going, and don't feel like you need to know everything. In a 2015 study, I found that fully two thirds of the teenagers whose parents discussed digital behaviors reported that their parents' opinions and thoughts were quite helpful. If your child knows something about AI companions that you don't, let them enjoy educating you.
AI companions may become a transformative social and technological development, raising questions about trust, ethics, marketing and relationships, and we need to help youth prepare as best we can. Research has long established that it's developmentally appropriate for children and teenagers to crave the attention and approval of their peers. It's going to be easy for some to choose virtual friends over real ones. Stay engaged, learn about the platforms they are using, and remind them of the value of struggle and conflict. They likely will be all right.
