
Women are reporting bad men on this app. Here's the legal tea on the app called Tea
Tea is marketed as a 'dating safety tool' for women, and it pledges to donate ten percent of its revenue to the National Domestic Violence Hotline. How does it achieve safety? By letting users post photos and gather information about potential suitors.
Many men online acknowledged how the app might protect women, but a number expressed worries about the fairness and legality of it all, ranging from the philosophical to the personal. 'The risk of abuse is insanely insanely high,' said one Redditor in a discussion on r/privacy. 'It seems pretty socially deleterious if any human can have a social media profile they can't view created for them without their consent or knowledge,' one Twitter user wrote. And, of course: 'They got me fellas,' one man posted on TikTok, showing what appeared to be a report about him on the app. ('Welp time to move,' he concluded.)
Then, on July 25, 4chan users claimed they found a database related to the app that included photos of users' IDs and other information, adding more fuel to the privacy fire. (The company confirmed the breach to 404 Media.) So what's the legal tea?
Using AI, the app checks a user's selfie to verify that she is a woman. Once verified, users can post photos of men, usually mined from social media profiles and other dating apps. The app lets users run those photos through a reverse image search, perform a basic background check, check public sex offender databases, and flag photos that may be used for 'catfishing' — misrepresenting one's identity online.
The app also features a 'Tea Party Group Chat,' which allows users to directly share information about men, and has a rating function, which allows users to share their experiences with Yelp-style reviews, awarding men a 'green flag' or a 'red flag.'
The inclusion of men's names, identities, and other information has prompted people to ask: Does the app enable users to violate others' privacy rights?
Tea's biggest problems are actually practical, not legal. The app certainly raises fascinating questions about privacy, the digital age, and the norms of modern romance, but it largely steers clear of major legal problems in the U.S. because of the way its forum is built.
There are really three buckets (teacups?) of legal issues that could arise in connection with the app: potential civil liability for privacy violations, potential civil liability for defamation, and possible criminal exposure related to online behavior.
In general, the right to privacy covers the right to be free from intrusion into one's personal affairs, the ability to control who has access to one's personal information and the right to be free from unwarranted publicity. We all enjoy some right to keep private our affairs and – relevant here – our images.
Many privacy laws are quite clear and straightforward. For instance, it is almost always a clear violation of law to publicize someone's Social Security number. Healthcare laws such as the Health Insurance Portability and Accountability Act, or HIPAA, set national standards for when and how medical offices can handle patients' health information. Specific privacy laws govern the disclosure of student records, financial information, personal data and a host of other information people generally want kept secure.
Like so much in our lives, the internet makes everything more complicated. (Perhaps the internet, to paraphrase Homer Simpson, might well be the cause of, and solution to, all the world's problems.) One norm we all now accept is that photographs of us are likely bouncing around the internet right now – and mostly these are pictures that we posted ourselves.
With Tea, some men have complained that the very act of enabling users to post photos of them online without their consent violates their privacy rights. It is fair for people to be concerned about their images or likenesses being published online. Still, many aspects of our world complicate any ability one might have to challenge the unconsented use of an image on a dating app, at least in the U.S. First, as a legal matter, many of the images that appear on Tea were first voluntarily posted to dating apps or social media sites. When a user posts a photo to the internet or to an app like Facebook or Instagram, he or she typically retains the rights to it (i.e., they still own it) but has granted the platform the right to distribute or display it in connection with the service. (To be fair, most users don't read the fine print when they sign up for apps.) It becomes harder to make a privacy rights argument when one has waived those very rights to the photo in another context.
A practical question also arises around enforcing a complaint that one's image appeared on an app without consent. A staggering number of photos that we didn't post ourselves exist online of nearly all of us; we don't have a right to challenge the legality of every group photo ever posted to a social media platform in which we appear. Certainly, Tea raises the stakes by explicitly inviting criticism or negative attention based on the photo. But where would the legal line around those photos be? Accompanying a photo with truthful information such as 'I went on a date with this man'? Truthful but potentially embarrassing information such as 'I went on a date with this man and he was cheap and didn't pay for my latte'? A truthful but mean-spirited expression of opinion that might bring huge reputational harm, such as 'I went on a date with this man and I honestly worry that he might be a sex offender'? (More on how the law handles individuals' sincere opinions in a moment.) No matter how embarrassing any such posts might be, the legal line around them is a fuzzy one.
In principle, a user could instead raise a copyright complaint if a photo they took and posted to a social media or dating platform gets posted by someone else to another platform, like Tea. If they are the copyright holder, or owner, of the image, and it was posted without their permission and isn't in the public domain, they could perhaps petition Tea to have the image taken down. It does not appear that that approach has been tested yet. Note, however, that people may not have the rights to many photos they appear in; when a person appears in an image, there's a good chance they didn't take it — unless it was a selfie.
Legally, defamation is the publication of false information that harms someone's reputation. Generally, for a statement or act to be considered defamatory, the following elements must be present: the statement must be made public to at least one other person; the statement must be presented as a fact, not an opinion, and must be untrue; the person publishing the statement must be at fault, either by being aware that the statement is false or being reckless about it; and the statement or act must have caused some damage, whether financial or in the form of emotional distress.
Anything published on Tea is public, so the first prong is easily met. The other criteria are heavily fact-specific and will depend on the nature of the statement made, the speaker's belief in or research into its truth, and the level of harm it caused. Even though a lot of the information published (spilled?) on Tea can cause great embarrassment to anyone outed or targeted on it, it is difficult to win most defamation suits based on an individual's sincere expressions of opinion or perceptions of events. Short of knowingly fabricating harmful information about one's date, a user merely expressing her negative opinions about a person or an experience with him is not likely to give rise to a successful defamation claim.
Likewise, the app itself isn't likely to lose a defamation suit over speech presented on its platform. The closest existing parallel might be the array of public Facebook pages that ask 'Are We Dating the Same Guy?' and invite users to post information to help determine whether a man is, well, drinking his tea from multiple mugs at once. Last year, a man sued Spill The Tea Inc., Meta (the parent company of Facebook) and 27 women in a Chicago-area group. He said they posted about him, writing that he was 'very clingy, very fast,' and that 'he told me what I wanted to hear until I slept with him, and then he ghosted.' Another woman posted a link to an article about a man charged with sexually assaulting a woman he met on the app. The man in the article's mugshot wasn't the plaintiff, but the suit alleged that the woman used the article to imply that he was. A federal judge threw the suit out, finding that none of the statements were false, all were subjective opinions, none were inherently damaging as defined by the law, and the plaintiff did not establish that any of his photos were used for commercial purposes (as would have been required for him to win under Illinois privacy law).
The app itself is also largely in the clear. Section 230 of the Communications Decency Act generally protects online platforms from being treated as publishers or speakers of content posted by their users, so Tea is largely shielded from liability for what happens on its platform. (The Section 230 question would get trickier if photos on the app were being used for sex trafficking, but that's not the case here.)
However, individuals on the app could personally face criminal exposure for truly extreme conduct. (Here, we are talking about tea that is not just steaming, but boiling hot.) For instance, several states have laws prohibiting 'doxxing,' or releasing unauthorized personal information with an intent to harm or harass someone. Establishing criminal intent that would stand up in court might be tricky, given that a Tea user could always argue that her intent was to protect other women, not to unduly harass the purported victim.
Several states have added electronic communications to their existing harassment and stalking laws, but these laws cover conduct far more egregious than anything that has been publicly reported as appearing on Tea. New Jersey's cyberharassment law, for example, makes it a crime to post obscene materials 'with the intent to emotionally harm a reasonable person or place a reasonable person in fear of physical or emotional harm.' Arizona's doxxing law makes it a crime to post an individual's personal identifying information 'for the purpose of imminently causing the person unwanted physical contact, injury, or harassment.' Other states' laws are generally in the same ballpark and require some form of malicious intent. It can certainly be embarrassing to have one's information splashed across the internet without one's consent, and even non-defamatory statements can bring real costs. However, the high bar of establishing criminal intent would make it difficult to prosecute most behavior that takes place on Tea.
There are reasonable questions about the structure of the app; men claim to have already gotten past the app's gender verification process by posting selfies taken by women, or by using AI to generate photos of themselves as women. (We will leave the issue of the cultural, political, and legal minefield of verifying anyone's gender in 2025 for another day.)
Men who have had the misfortune of appearing on Tea have valid concerns about the conduct it enables. Some of the information a user might make public on the app – behavior on dates, information about sexually transmitted diseases, even criminal history – is exceptionally sensitive and might originally have been disclosed to another person with the hope that it would be kept private. Human interaction is complex, and people have different standards for what they find objectionable; what one person may interpret as a playful joke, another may interpret as a line-crossing insult worthy of being broadcast to the world. Men have complained that the app's group chat function invites not only discussions of misconduct or safety, but mockery of their appearance, or even of the mere fact of their decision to end a relationship at a given time – a right everyone has. What kind of accountability could there be for posted information that is inaccurate or simply hurtful?
Still, one need not strain to recognize the many reasons why an app like Tea was created in the first place. Reams of data stretching back a generation show that safety, both on the internet and in dating, is an obvious concern for women. For starters, 2023 data from the Pew Research Center found that women are more likely than men to say that dating apps feel unsafe. In addition, incontrovertible statistics have long documented America's rate of intimate partner violence against women. Statistics show that over a third of women have experienced rape, physical violence, and/or stalking by an intimate partner, and women aged 18 to 34 – years in which many women who choose to date might be doing so – experience the highest rates of intimate partner violence. The app largely trended this week because many women shared potentially important things it helped them discover, such as dates who were on sex offender lists or had histories of domestic violence. In light of these realities, concerns from men about the legality of an app like Tea can seem rather small.
However, perhaps the biggest issue Tea exposes isn't with the law, but with the digital age generally. Much internet communication is largely anonymous and pooled (i.e., visible to all others), which encourages piling on. Internet forums that allow people to air grievances blur the important social and legal line between accountability and punishment. Comment threads, whether on public forums or a closed, women-only dating safety app, welcome, and even invite, vigilante justice. At times, that form of justice may be useful and valid, given the lack of other channels of recourse – whether legal or personal – aggrieved daters may have.
Still, Tea — the app — is not the problem. It is a symptom of a far broader issue: how people share information about each other, and date, in a national climate characterized by profound personal distrust, where women are treated poorly online, and where a ballooning number of platforms empower individuals to publicize unverified information about others. (Note that the aging writer of this piece met his wife the old-fashioned way: on a website.) While publicizing that kind of information can sometimes create legal problems, the biggest concern in all of this is about all of us, not a single app.
Some of the men outed on the app certainly have a basis to feel aggrieved. But for the most part, if they escalate to filing legal challenges, they will likely find themselves crying alone over spilled tea.
ABOUT THE AUTHOR: Elliot Williams is a CNN legal analyst and former deputy assistant attorney general at the Justice Department. He is the author of 'Five Bullets: The Story of Bernie Goetz, New York's Explosive 80s, and the Subway Vigilante Trial That Divided the Nation,' coming from Penguin Press in 2026.
Key Points It regained compliance with New York Stock Exchange listing requirements. These stipulate that a stock must trade at a trailing average of $1 per share. 10 stocks we like better than Ses Ai › One trading day can make quite a difference in the life of a stock. This was well on display Monday with next-generation electric vehicle (EV) battery developer SES AI (NYSE: SES). The company's share price leaped more than 13% skyward on some good news with its stock listing. That advance was more than strong enough to crush the 1.5% increase of the bellwether S&P 500 index. An electric announcement SES AI disclosed in a regulatory filing Monday, no doubt with immense relief, that it received notice from the New York Stock Exchange that it had regained compliance with the bourse's listing requirement. Specifically, this mandates that a stock on the exchange must trade for about $1 per share for a 30-day period. In the tersely worded document, SES AI said NYSE informed the company on Friday, citing the 30-day stretch that ended on July 31. Although SES AI is a relatively early-stage company, it has lately been drawing revenue and attracting notice for its cutting-edge battery solutions. However, the growth of EV sales isn't as robust as it once was, and the company has posted a string of bottom-line losses. Growth by other means Yet SES AI is also a company on the move; in recent days, it has shown an appetite for growing through acquisitions. It concluded a deal to purchase energy storage systems (ESS) developer UZ Energy for a price of around $25.5 million. That acquisition could prove to be rather complementary to its business, assuming it is integrated effectively and efficiently into its operations. Should you buy stock in Ses Ai right now? Before you buy stock in Ses Ai, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the for investors to buy now… and Ses Ai wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $624,823!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $1,064,820!* Now, it's worth noting Stock Advisor's total average return is 1,019% — a market-crushing outperformance compared to 178% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks » *Stock Advisor returns as of August 4, 2025 Eric Volkman has no position in any of the stocks mentioned. The Motley Fool has no position in any of the stocks mentioned. The Motley Fool has a disclosure policy. Why SES AI Stock Surged 13% Higher Today was originally published by The Motley Fool Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data