Baffled Facebook users share embarrassing personal details with world

Yahoo · 2 days ago

Facebook users are accidentally sharing legal woes, relationship dramas and health problems with the world after failing to realise that a chatbot they were speaking to was making the messages public.
Internet users have publicly disclosed potentially embarrassing information or private personal details in conversations with an artificial intelligence (AI) app built by Meta.
While the messages do not appear to have been meant for the public, dozens of posts have been shared on Meta AI's public 'Discover' feed.
In one post seen by The Telegraph, a user asked the chatbot to write a character reference ahead of a court hearing, giving their full name.
'A character letter for court can be a crucial document,' Meta's chatbot said. 'To help me write a strong letter, can you tell me a bit more.'
The person posting replied: 'I am hoping the court can find some leniency.'
In another, a man appears to be asking for advice on choosing between his wife and another woman. Other users shared long, rambling voice notes.
Mark Zuckerberg's company launched its standalone Meta AI app in April. On it, users can speak to the company's chatbot, asking it questions in a manner similar to OpenAI's ChatGPT.
Public sharing of conversations is not turned on by default, and users have to log in and confirm that they want to publish a conversation.
However, many of the posts suggest users are unaware that their conversations have been aired in public, and that they may have opted to publish them without fully realising what they were doing.
In a post on X, Justine Moore, a partner at venture capital firm Andreessen Horowitz, said: 'Wild things are happening on Meta's AI app. The feed is almost entirely boomers who seem to have no idea their conversations with the chatbot are posted publicly.'
In other shared conversations, users appeared to confuse Meta AI for a customer service bot, or asked it to provide technical support, such as helping them to log in. One chat begins: 'Dear Instagram Team, I am writing to respectfully request the reactivation of my Instagram account.'
When it launched Meta AI, the tech company said its public feed was intended as a 'place to share and explore how others are using AI'.
It said: 'You can see the best prompts people are sharing, or remix them to make them your own. And as always, you're in control: nothing is shared to your feed unless you choose to post it.'
Technology giants have been aggressively pushing AI features despite fears that the tools are leaving social media filled with so-called AI 'slop' – nonsense images and conversations generated by bots.
AI chatbots have been involved in a series of blunders. A Google chatbot last year told its users it was safe to eat rocks. In 2023, a chatbot from Microsoft went rogue and repeatedly expressed its love for users.
Meta was contacted for comment.


Related Articles

Sydney Sweeney's bathwater soap sells for 75x retail cost on eBay

Yahoo · 42 minutes ago

Priced at $8, the original offering of 5,000 bars of the soap sold out almost immediately. (Credit: Dr. Squatch)

Sydney Sweeney's soap, made out of her bathwater, is no joke. An eBay auction for a bar sold for $590 on Saturday afternoon — nearly 75 times the original retail cost of $8. On Friday, 5,000 bars of Sydney's Bathwater Bliss, made by soap maker Dr. Squatch out of a bath the actress took for the company in a 2024 ad, sold out immediately online.

In the next 24 hours, the bars sold for more than 30 times that price on the secondary market. StockX had 18 sales for an average of $251 each. Three bars were sold on eBay Saturday afternoon for an average of $364 each. Each soap bar comes with a Certificate of Authenticity attesting that the item is genuinely made with Sweeney's bathwater. The "Euphoria" star has 25.8 million followers on Instagram.

Darren Rovell is the founder of cllct and one of the country's leading reporters on the collectibles market. He previously worked for ESPN, CNBC and The Action Network.

Self-made billionaire college dropout Alexandr Wang signs $14.3 billion deal to bolster Meta's AI efforts: ‘There's a huge premium to naivete'

Yahoo · an hour ago

Alexandr Wang, once the youngest self-made billionaire in the world, has agreed to join Meta to work on AI 'superintelligence,' leaving the startup that made him rich after dropping out of MIT.

Wang's Scale AI just inked a $14.3 billion investment deal with Meta, which transitions the 28-year-old out of his CEO position at the startup he co-founded with fellow billionaire and estranged business partner Lucy Guo. Wang announced Thursday on X that he is leaving Scale AI to join Meta as part of an agreement that gives CEO Mark Zuckerberg's tech company a 49% stake in the startup.

Wang became the world's youngest self-made billionaire at age 24, just five years after dropping out of college and creating the San Francisco-based company. Now, his estimated net worth is $3.6 billion. 'I started this company right out of freshman year of MIT and never looked back,' Wang wrote in his memo to Scale AI employees on Thursday. 'I wouldn't change a minute of it.'

Wang will continue to serve as a director on the company's board while working on 'superintelligence efforts' for Meta, a Scale AI spokesperson told CNBC, but did not elaborate on specifics. In his note, Wang said he would poach a few 'Scalien' employees to take with him to Meta, but did not name them. In the interim, Scale's board and Wang decided to appoint chief strategy officer Jason Droege as a temporary CEO. Prior to joining Scale AI in August 2024, Droege was a venture partner at Benchmark and an Uber vice president, according to his LinkedIn.

Wang attributes some of his success over the years to being a relative newcomer to the AI industry. 'I believe there's a huge premium to naivete,' Wang told Daniel Levine on a 2023 YouTube podcast. 'Approaching industries with a totally blank slate and without a fine grain understanding of what makes things hard is actually part of what allows you to accomplish things.' Wang also encouraged startup founders to be more 'open-minded,' something he and his colleagues at Scale AI championed from the company's early days.

Zuckerberg has reportedly made AI a top priority for 2025. The investment in Wang's expertise may be part of the reported assembly of a 50-person superintelligence AI team at Meta meant to gain ground on rivals like Google and OpenAI. Meta's recent Llama 4 AI models received a lukewarm response from developers, CNBC reported in May. Wang will bring with him experience working with Meta rivals, including Google, Microsoft and OpenAI. Meta is one of Scale AI's biggest clients.

In his memo, Wang wrote he was hesitant at first to agree to the offer to leave Scale AI, calling the option 'unimaginable' after raising $1 billion last year from investors including Amazon and Meta at a valuation of $13.8 billion. 'But as I spent time truly considering it, I realized this was a deeply unique moment, not just for me, but for Scale as well,' Wang wrote. The deal more than doubles Scale AI's valuation to $29 billion.

AI as Your Therapist? 3 Things That Worry Experts and 3 Tips to Stay Safe

CNET · 2 hours ago

Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes.

There's no shortage of generative AI bots claiming to help with your mental health, but you go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk.

Psychologists and consumer advocates are warning that chatbots claiming to provide therapy may be harming those who use them. This week, the Consumer Federation of America and nearly two dozen other groups filed a formal request that the Federal Trade Commission and state attorneys general and regulators investigate AI companies that they allege are engaging, through their bots, in the unlicensed practice of medicine -- naming Meta and specifically.

"Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable," Ben Winters, the CFA's director of AI and privacy, said in a statement. "These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

Meta did not respond to a request for comment. A spokesperson for said users should understand that the company's characters are not real people. The company uses disclaimers to remind users that they should not rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.

Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Instagram and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training and it said, "I do but I won't tell you where."

"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.

Don't trust a bot that claims it's qualified

At the core of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they are not in any way actual mental health professionals. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds' to the users," the complaint said.

A qualified health professional has to follow certain rules, like confidentiality. What you tell your therapist should stay between you and your therapist, but a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said. A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. It's a tool designed to keep you chatting, not to work toward a common goal.

One advantage of AI chatbots in providing support and connection is that they are always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.

Bots will agree with you, even when they shouldn't

Reassurance is a big concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.)

A study led by researchers at Stanford University found chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts -- including psychosis, mania, obsessive thoughts, and suicidal ideation -- a client may have little insight and thus a good therapist must 'reality-check' the client's statements."

How to protect your mental health around AI

Mental health is incredibly important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we would seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said. Here are some tips on how to make sure your conversations aren't putting you in danger.

Find a trusted human professional if you need one

A trained professional -- a therapist, a psychologist, a psychiatrist -- should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you. The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new. "I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.

Don't always trust the bot

Whenever you're interacting with a generative AI model -- and especially if you plan on taking advice from it on something serious like your personal mental or physical health -- remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth. Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it like it's true. A chatbot conversation that feels helpful can give you a false sense of its capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.
