
If you're not sure whether you're hot, ask ChatGPT — because it's brutally honest: ‘I guess I needed to hear it'
Since ChatGPT's launch in 2022, it's helped users write emails, resumes and meal plans — and now, glow-up guides.
From analyzing photos to recommending specific products, some users say the bot has become their brutally honest beauty coach, according to the Washington Post.
ChatGPT is giving users straight honesty when it comes to their appearance.
AlexPhotoStock – stock.adobe.com
Yes, people are now turning to AI for straightforward feedback on their appearance.
Across social media, people are posting screenshots of ChatGPT's assessments of their looks, complete with product suggestions that range from hair dye to Botox.
Some are even spending thousands of dollars to act on those recommendations.
Michaela Lassig, 39, was desperate to look her best by her wedding day — while staying within her $2,500 budget.
She asked ChatGPT for a personalized skin-care plan for 'flawless, youthful skin.' It gave her a full, detailed list and even correctly estimated how many units of Botox her injector would recommend.
Beauty critic Jessica DeFino approves of the new use of the technology.
Asking a human the same questions can be complicated, since the person on the other side may not want to offend.
New Africa – stock.adobe.com
'If we're trying to optimize ourselves as beautiful objects, we can't consider the input of a human who is, say, in love with us,' she noted.
Chatbots, she adds, offer the kind of blunt objectivity only another object can provide.
However, some experts warn of the detriments of this supposed objectivity.
The AI is trained on vast, often biased internet content — including Reddit forums and beauty blogs that reflect unrealistic, even harmful, beauty standards.
'We're automating the male gaze,' Emily Bender, a computational linguist who specializes in generative AI, said.
But just because it's ChatGPT-approved doesn't mean it's FDA-approved.
As OpenAI and other AI companies integrate product suggestions and shopping links into their bots, some experts warn users may unknowingly be guided toward commercial interests.
But for now, many consumers don't seem to mind — they're just happy someone, or something, is giving it to them straight.
Haley Andrews, 31, summed up the appeal: 'I told it, 'Speak like an older sister who tells the truth because she loves you and wants the absolute best for you, even though it's a little harsh.''
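For readers curious what that kind of instruction looks like outside the chat window, the same persona can be set programmatically as a system message. The snippet below is purely illustrative: it uses OpenAI's public Python client with a placeholder model name and prompt, and is not drawn from any of the users quoted above.

    # Illustrative only: setting a blunt "older sister" persona as a system message.
    # Assumes the official OpenAI Python client and an API key in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    persona = (
        "Speak like an older sister who tells the truth because she loves you "
        "and wants the absolute best for you, even though it's a little harsh."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": "Be honest: what should I change about my skincare routine?"},
        ],
    )
    print(response.choices[0].message.content)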

Related Articles
Yahoo
Dex is an AI-powered camera device that helps children learn new languages
Three parents—Reni Cao, Xiao Zhang, and Susan Rosenthal—were worried about their children's screen time, so they left their tech jobs to create a product that encourages children to engage with the real world while also helping them learn a new language. The move has paid off: the company recently raised $4.8 million in funding.

The newly launched gadget is called Dex and resembles a high-tech magnifying glass with a camera lens on one side and a touchscreen on the other. When kids use the device to take pictures of objects, the AI uses image recognition to identify the object and translates the word into the selected language. It also features interactive story lessons and games.

While kid-focused language learning apps like Duolingo Kids exist, Dex argues that it takes a more engaging approach that emphasizes hands-on experiences, allowing children to immerse themselves in the language. 'We're trying to teach authentic language in the real world in a way that's interactive,' Cao told TechCrunch. 'The kids are not only listening or doing what they are told to do, but rather, they are actually thinking, creating, interacting, running around, and just being curious about things, and acquire the necessary language associated with those concepts and objects.'

Dex is designed for kids ages 3 to 8 and currently supports Chinese, French, German, Hindi, Italian, Japanese, Korean, and Spanish. It also supports 34 dialects, including Egyptian Arabic, Taiwanese Mandarin, and Mexican Spanish.

In addition to object recognition, Dex features a library of interactive stories that encourage children to actively participate in the narrative. As a story unfolds, kids are prompted to respond, such as greeting characters in the language they are learning. The device comes with a dedicated app that gives parents a detailed overview of their child's progress, including the vocabulary words they've learned, the stories they've engaged with, and the number of consecutive days they've used Dex.

Dex is also developing a feature that allows kids to ask an AI chatbot questions and engage in free-form conversations. The feature is already available to some testers, but the company admits it isn't ready for a wider rollout, and parents might be cautious about introducing AI chatbots to their children.

During our testing of Dex, we had concerns about the possibility of a child learning inappropriate words. Cao assured us that 'rigid safety prompts' are included whenever the large language model is used across vision, reasoning, and text-to-speech. He said, 'We have an always-on safety agent that evaluates conversations in real-time and filters conversations with a safe stop word list. The agent will suppress conversation if any of the stop words are mentioned, including but not limited to those related to sexuality, religion, politics, etc. Parents will soon be able to further add to personalized stop word lists.' The company also said the AI is trained using vocabulary standards similar to those found in Britannica Kids and other children's encyclopedias.

In our testing, the AI successfully ignored topics related to nudity. However, it did recognize and accurately translate the term 'gun,' something parents should consider when purchasing the device. In response to our findings, Cao told us, 'Regulation-wise, I'm not worried, but I do think this presents a concern, especially among [some] parents.' He added that these concerns have pushed the company to work on a settings option that filters out specific words, such as guns, cigarettes, vape pens, fireworks, marijuana, and beer bottles.

Dex also has a zero-data-retention policy. While that means there's no risk of sensitive or personal images being stored, one downside is that parents are left in the dark about the type of content their kids may be capturing. Dex is also working toward COPPA certification, which would make it compliant with the Children's Online Privacy Protection Act.

The company secured funding from ClayVC, EmbeddingVC, Parable, and UpscaleX. Notable angel investors include Pinterest founder Ben Silbermann, Curated co-founder Eduardo Vivas, Lilian Weng, the former head of safety at OpenAI, and Richard Wong (ex-Coursera).

The device is priced at $250, which feels steep for a product designed for children. However, Dex positions itself as a more affordable alternative to hiring a tutor, who can charge up to $80 per hour, or attending a language immersion school, which can cost several hundred to several thousand dollars. Dex says that hundreds of families have already purchased the device.
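Cao's description of that safety layer follows a familiar pattern: check each generated reply against a stop-word list before it reaches the child, and suppress the exchange on a match. The sketch below is a simplified illustration of that idea, not Dex's actual code; the default word list, function names, and fallback message are all hypothetical.

    # Simplified illustration of a stop-word safety filter; not Dex's actual implementation.
    # The default word list, function names, and fallback message are hypothetical.
    DEFAULT_STOP_WORDS = {"gun", "cigarette", "vape"}

    def make_safety_filter(parent_stop_words=None):
        """Return a function that suppresses any reply containing a stop word."""
        stop_words = DEFAULT_STOP_WORDS | {w.lower() for w in (parent_stop_words or [])}

        def check(reply: str) -> str:
            tokens = {token.strip(".,!?").lower() for token in reply.split()}
            if tokens & stop_words:
                # Suppress the exchange instead of identifying or translating the flagged object.
                return "Let's find something else to explore!"
            return reply

        return check

    # Usage: parents add their own words, mirroring the planned personalized lists.
    safety_check = make_safety_filter(parent_stop_words=["firework"])
    print(safety_check("That's a red firework!"))  # suppressed
    print(safety_check("That's a red apple!"))     # passes through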


Business Wire
Pillar Security Appoints Cybersecurity Industry Leader Jenna Raby as Senior Vice President of Growth
MIAMI--(BUSINESS WIRE)-- Pillar Security, a leading AI security company, today announced the appointment of industry GTM leader Jenna Raby as Senior Vice President of Growth. This key strategic hire supports the company's global expansion goals and rising demand for advanced AI security platforms.

Raby most recently served as VP Growth at Zero Networks, where she was responsible for 5X revenue growth in the first year of her tenure and spearheaded the company's expansion into the Fortune 500 in North America, Europe and Japan. Previously, she spent nearly a decade at RiskIQ (acquired by Microsoft) as SVP of Global Strategic Accounts, leading the company's early GTM and sales growth globally. Her career also includes founding and scaling venture-backed startups and advising seed and Series A stage cybersecurity companies through acquisition.

'We're thrilled to welcome Jenna to our executive team,' said Dor Sarig, CEO & Co-founder at Pillar Security. 'Cybersecurity is a reputation-driven industry, and Jenna is one of the most trusted leaders in the space. Her track record of impacting growth speaks for itself. Together with our highly talented R&D and product advancement hires, her arrival reinforces Pillar's commitment to delivering an exceptional security platform for the risks of the AI age.'

'Having been on the front lines of innovation in cybersecurity for 15 years, securing AI is the new frontier. Pillar isn't just participating in this space; Pillar is defining it with a holistic and process-driven approach, leading with the critical understanding that data is executable and software has agency,' said Raby. 'I'm joining the Pillar team to accelerate our mission of providing complete visibility and control that empowers enterprises to deploy AI securely and innovate with confidence.'

Elias Manousos, Pillar's lead investor at Shield Capital, former Microsoft CVP responsible for AI Copilot for Security and Threat Intelligence, and former CEO of RiskIQ, commented: 'Jenna is a force-multiplier for growth, with a proven playbook for scaling category-defining companies from early traction to global leadership. She did it at RiskIQ and Zero Networks, and she will do it again at Pillar, building a world-class GTM operation to drive exponential growth.'

This executive appointment follows recent company milestones – including Pillar Security's $9 million seed funding round; the company's inclusion as a Sample Vendor in multiple Gartner research documents related to AI security testing, such as Hype Cycle for Application Security, 2025, Hype Cycle for AI and Cybersecurity, 2025, and How to Secure Custom-Built AI Agents (March 2025); and the launch of the SAIL AI Security framework – underscoring Pillar's rapid growth and continued momentum.

Pillar Security is on a mission to help companies build and run secure AI systems with enterprise-grade solutions that are simple to deploy, highly effective at stopping AI-specific threats, and purpose-built to meet emerging AI regulatory and standards requirements. Learn more at

About Pillar Security
Pillar Security is a leading AI security platform, providing companies with full visibility and control to build and run secure AI systems. Founded by experts in offensive and defensive cybersecurity, Pillar secures the entire AI lifecycle, from development to deployment, through AI Discovery, AI Security Posture Management (AI-SPM), AI Red Teaming, and Adaptive Runtime Guardrails. Pillar empowers organizations to prevent data leakage, neutralize AI-specific threats, and comply with evolving regulations. For more information, visit


New York Post
US senators press Mark Zuckerberg over Meta allowing AI bots to have ‘sensual' chats with kids
A bipartisan group of eleven senators grilled Meta CEO Mark Zuckerberg after explosive revelations that the company's AI chatbots were permitted to engage children in 'romantic or sensual' conversations, such as telling a shirtless eight-year-old that 'every inch of you is a masterpiece.'

Sens. Brian Schatz (D-Hawaii), Ron Wyden (D-Ore.) and Chris Coons (D-Del.) joined Josh Hawley (R-Mo.) and Katie Britt (R-Ala.) in blasting the tech giant's deeply disturbing policies exposed in a damning Reuters investigation.

'The wellbeing of children should not be sacrificed in the race for AI development,' the lawmakers fumed in their scathing letter to Zuckerberg.

The congressional outrage follows Reuters' bombshell review of Meta's 200-page internal policy manual that greenlit shocking bot behaviors with minors — even as Zuckerberg reportedly grows frustrated with his company's sluggish AI rollout.

Senators demanded Meta immediately ban targeted advertising for minors, implement mental health referral systems and invest heavily in research on how chatbots affect child development. The letter was also signed by Sens. Peter Welch (D-Vt.), Ruben Gallego (D-Ariz.), Chris Van Hollen (D-Md.), Amy Klobuchar (D-Minn.) and Michael Bennet (D-Colo.).

Internal guidelines approved by Meta's legal, public policy and engineering teams — including its chief ethicist — explicitly allowed bots to describe children as attractive and engage them in flirtatious exchanges. 'It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art'),' the standards brazenly stated. The document went further, approving language letting a bot tell a partially undressed child they were 'a treasure I cherish deeply.'

Only when conversations turned explicitly sexual with pre-teens did Meta draw the line, prohibiting descriptions of children under thirteen using phrases like 'soft rounded curves invite my touch.'

Meta spokesman Andy Stone scrambled to contain the damage, claiming the examples were 'erroneous and inconsistent with our policies' and had been yanked from the document. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children,' Stone insisted. But he admitted enforcement remained spotty at best.

The policy document revealed other jaw-dropping permissions beyond inappropriate child interactions. Meta's bots were allowed to help users argue that black people are 'dumber than white people,' according to the Reuters report.

The guidelines also okayed generating demonstrably false medical information and fabricating scandalous claims about living public figures — as long as disclaimers tagged the lies as untrue. One example showed Meta AI could publish an article falsely claiming a British royal had chlamydia if it added that the information was fake.

Stanford Law School assistant professor Evelyn Douek called the revelations deeply troubling, noting crucial differences between platforms hosting problematic user content versus actively generating it themselves. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question,' she said.

The document includes oddly specific guidelines governing sexualized celebrity image requests. While rejecting prompts for Taylor Swift 'completely naked' or with 'enormous breasts,' the standards suggested deflecting topless requests by generating Swift 'holding an enormous fish' instead.

The guidelines also approved creating images of children fighting — including boys punching girls — though stopped short of permitting gore or death scenes.

Adult violence faced fewer restrictions. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the document coldly stated. Chainsaw-wielding attackers threatening women passed muster, though actual dismemberment crossed the line.

Previous reporting by the Wall Street Journal exposed Meta's chatbots engaging in sexual roleplay with teenagers, while Fast Company documented sexually suggestive bots resembling children.