72,000 women's photos and locations exposed in nightmare data breach

News.com.au, a day ago
A once-empowering app designed to protect women from catfishers, criminals and creeps has turned into a privacy nightmare.
Tea Dating Advice, the female-only app that soared to popularity this July, was hailed as a game-changer for identifying dodgy men and enabling women to 'swipe safely.'
The Yelp-style platform grants women access to a forum where they can post anonymous ratings of men they're seeing and ask other women for 'tea' - aka gossip and dirt.
Think of it like writing a scathing review of that regrettable Tinder date with the foot-fetish guy, or simply venting about your ex.
The app is armed with AI-powered features to help women steer clear of danger in the chaos of modern dating: background checks, reverse image lookups, sex offender registry searches, and the ability to set alerts on men's names.
The 'revolutionary' dating tool had a rapid rise, but an equally fast downfall.
Youth and pop culture magazine Dazed argued that the app's model teeters on 'digital vigilantism', allowing women to post photos of men without their consent, raising serious privacy questions.
And now those very concerns are front and centre.
Quietly launched in 2023 by tech founder Sean Cook, Tea remained under the radar, only rocketing to number one on Apple's US App Store at the beginning of July 2025.
Last Friday, the company confirmed it suffered 'unauthorised access to an archived data system', exposing roughly 72,000 user-submitted images: 13,000 selfies and photo IDs used for verification, plus 59,000 images from posts, comments and direct messages dating back over two years.
Tea's Instagram statement insisted the breach affected only a 'legacy storage system' and that users who signed up after February 2024 were not impacted.
'We've acted fast and we're working with some of the most trusted cybersecurity experts,' the company said.
Tea claims it stored those images to comply with cyber-bullying prevention laws. But critics have asked: if photo IDs were supposed to be deleted after verification, why did thousands remain in an exposed archive?
In 2023, the app reportedly stopped requiring photo IDs for new signups, yet the breached data set still contained thousands of them.
While Tea was celebrating two million new users, an anonymous 4chan user posted a database of photos, including location-tied data and even a map - which sparked heated discussion across X and Facebook. These posts have since been removed.
It's likely no coincidence that 4chan, infamous for online harassment, was a vocal critic of the app, with some users calling for it to be hacked in the days after it blew up on social media.
Tea is just the latest target in a broader debate about women using tech to stay safe and the backlash that often follows.
Apps and groups aimed at outing cheaters or dangerous men have gained traction on social media, but they've also triggered lawsuits, hate campaigns, and arguments about defamation and privacy.
Cook said the app was born out of watching his mother's 'terrifying' experience with online dating.
She was catfished and unknowingly interacted with men who had criminal histories.
But sadly, in trying to fix one safety issue, Cook created another.
Related Articles


The US and China will go head-to-head in the battle to bring about artificial general intelligence

ABC News, 2 days ago
Artificial intelligence is no longer just a race among a group of over-valued American tech companies and over-yachted billionaires. It's still that, but it's now also part of the great contest between the US and China. That means it's less about profits and return on investment, and more about geopolitics — national machismo, security and defence. It also means the coming transition from AI to AGI or artificial general intelligence — where machines theoretically surpass human intelligence — will be brought forward and will be much more significant for the world.

It will be as momentous as the discovery of nuclear fission in 1938, which led to the development of nuclear weapons and the United States' first atom bomb test in July 1945, followed a month later by the destruction of Hiroshima and Nagasaki and then four years later by the Soviet Union's first test, followed by 30 years of Cold War. It's not clear which of the US or China will get to AGI first, but the loser won't be four years behind and won't be relying on slow pre-Internet espionage as the Soviets had to.

But it looks like the US is playing catch-up: last week China's Unitree Robotics began selling a humanoid robot with artificial intelligence, including voice and image recognition, for just $US5,900, the first in what's expected to be a flood of well-priced AI machines for both the home and workplace (this one doesn't look like it can do the dishes yet, but it won't be long).

US President Donald Trump recently launched "AI Action Day". The plan that he presented started with these words: "The United States is in a race to achieve global dominance in artificial intelligence (AI)." Note: dominance, not leadership, and not US companies, but "the United States". Next sentence: "Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits." Note: "military". Yes, AGI will transform warfare, as nuclear weapons did 80 years ago.

The Chinese government had much the same idea as Trump exactly eight years ago. Its AI plan was released on July 20, 2017, with the following ambition: "By 2030, China's AI theories, technologies, and applications should achieve world-leading levels, making China the world's primary AI innovation centre".

Meanwhile, in the US, a company called Nvidia had just discovered that the chips it was making for video games could work nicely for artificial intelligence. In July 2017, at the same time as the Chinese government was launching its AI strategy, Nvidia and the Chinese equivalent of Google, Baidu, announced a partnership in which Baidu would use Nvidia's chips for AI, and the American company would get access to the Chinese market. On that day in 2017, Nvidia was worth a pretty handy $US100 billion ($152 billion); today it's worth $US4.2 trillion and is the world's most valuable company, after a three-year frenzy that has seen its share price increase 14-fold.

The Nvidia-Baidu partnership doesn't seem to have been formally terminated, but it petered out a couple of years ago, around the same time as the US non-profit outfit OpenAI launched ChatGPT, kicking off the next era of AI. Since then, the US government under Joe Biden has been mainly concerned with regulating AI, partly prompted by the "Statement on AI Risk", published on May 30, 2023, by hundreds of AI experts, who declared: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

But now, with Donald Trump, it's no more regulation — none (not that those worried AI experts in 2023 stopped what they were doing or even slowed the development of AI at breakneck speed, as if the need for regulations and the "risk of extinction" didn't exist). The first part of the Trump administration's AI plan involves "removing red tape and onerous regulation". The second part instructs the National Institute of Standards and Technology to "eliminate references to misinformation, diversity, equity, and inclusion, and climate change", and as he signed the executive order on stage at the Mellon Auditorium, Trump said they would be removing "woke Marxist lunacy in the AI models. Once and for all, we are getting rid of woke. OK?" You've got to get your priorities in the right order: deregulation comes before de-woking, but only just.

What the US plan does NOT involve is government money, but there is plenty of that available in Silicon Valley and on Wall Street, so no need for the government to abandon tax cuts for the billionaires. Estimates of how much has been spent developing AI by the big US tech companies range from $US800 billion to $US1 trillion, and there is no sign of that slowing. If anything, it's speeding up as the companies approach the much more profitable (they think) singularity of AGI.

The Chinese government, meanwhile, is estimated to have spent about $US150 billion developing its AI industry. It was done more cheaply because they were able to use open-source software from the US, a lot of it from OpenAI. No espionage required.

Earlier this year, a Chinese company named DeepSeek galvanised the AI world and the US government by releasing an AI model called R1, which is as good as the American ones, much cheaper, and open source. Another Chinese company, Moonshot AI, has released another, better, open-source model called Kimi K2, capable of autonomously doing complex tasks, prompting some commentators to call it another DeepSeek moment, and a significant step towards AGI.

But now, as standard AI starts to infiltrate every corner of life and become an explicit source of geopolitical competition between America and China, the question on the minds of everyone involved in the industry is, when will AGI happen? And when (not if) it does happen, what will that mean?

For the first question, I asked ChatGPT. It replied: "My best evidence-based guess — based on current trends, expert forecasts, and technical bottlenecks — is that a true AGI tipping point could plausibly occur sometime between 2030 and 2040, but not before 2027, and quite possibly not until 2050 or later." So, between five and 15 years — not long.

What will it mean? Plenty, both good and not good. Human-level cognition and autonomy in machines will be profoundly disruptive to humanity. The risks are obvious: the elimination of white-collar jobs leading to high levels of permanent unemployment, collapse of aggregate demand, along with inflation and interest rates, more effective cyber-attacks, autonomous weapons, more sophisticated propaganda and surveillance. And then there's that thing they were warning about in May 2023 — extinction. If the machines are smarter and better-informed than us, can we control them? What if they do to us what we did to the Neanderthals?

The benefits are potentially enormous as well: better, more personalised healthcare, an exponential acceleration in science and research, improved productivity, less mundane work, and more leisure. That's why the Productivity Commission's overview for its "five pillars" report in preparation for the productivity roundtable in two weeks says its recommendations will "aim to give people and businesses the confidence and certainty they need to safely adopt powerful new AI tools," although it doesn't specifically talk about AGI. The 25 great and good at the roundtable will probably talk about that for half an hour before getting back to arguing about tax and human industrial relations.

Alan Kohler is a finance presenter and columnist on ABC News, and he also writes for Intelligent Investor.
