Latest news with #JustineMoore


Time of India
10 hours ago
- Time of India
Meta AI may have a 'personal problem', and a very serious one — and it's a warning for users
The Meta AI app may have a "personal chat problem" with the potential to escalate into a major privacy issue. Users of the AI assistant developed by Facebook's parent company have complained that its "Discover" feed is displaying user prompts publicly without their knowledge. The feature was introduced with the transition from the Meta View app to the Meta AI app in April, and it allows others to see the types of prompts people are submitting to Meta's AI chatbot. A concerned user named Justine Moore took to the social media platform X (formerly Twitter) to note that prompts in the public feed suggest users may not know their queries are being openly displayed. This raises significant privacy implications for anyone interacting with the Meta AI service.

How Meta AI is leaking users' personal chats

In the X post, Moore shared screenshots of personal chats that Meta AI was showing other users and wrote: 'Wild things are happening on Meta's AI app. The feed is almost entirely boomers who seem to have no idea their conversations with the chatbot are posted publicly. They get pretty personal (see second pic, which I anonymized).'

Later in the same thread, Moore shared transcripts of some chats and wrote: 'To clarify - conversations aren't public by default. You have to hit a 'share' button, but many users seem to think it's posting to a private journal. Resulting in things like this…a man trying to write a romantic poem for his gf. You can hear a 6 min audio clip, here's a transcribed excerpt:'

'Obsessed with this man who tries to use the app to find a woman with a 'big b**ty and nice r**k.' When it won't post on his behalf in local FB groups, he asks the bot to 'delete my number.' Instead, he (accidentally?) shares it publicly - I redacted,' Moore added.

As Moore suggests, users have been unintentionally exposing sensitive information on Meta AI. Users are advised to refrain from sharing prompts containing private medical and tax details, addresses, and intimate confessions (ranging from relationship doubts to personal dating inquiries) with the app, as these seem to be appearing publicly.


Gizmodo
a day ago
- Business
- Gizmodo
PSA: Get Your Parents Off the Meta AI App Right Now
As much as I've enjoyed using Meta's Ray-Bans, I haven't been a very big fan of the switch/rebrand from the Meta View app, which was a fairly straightforward companion to the smart glasses. Now, we've got the Meta AI app, a very not-straightforward half-glasses companion that really, really tries to get you to interact with—what else—AI. The list of reasons why I don't like the app transition is long, but there's always room for more grievances in my book, and unfortunately for Meta (and for us), that list just got a little bit longer.

'Wild things are happening on Meta's AI app. The feed is almost entirely boomers who seem to have no idea their conversations with the chatbot are posted publicly. They get pretty personal (see second pic, which I anonymized).' — Justine Moore (@venturetwins) June 11, 2025

There were a lot of tweaks when Meta crossed over from the Meta View app to the Meta AI app back in late April, and it seems not all of them have registered with the people using it. Arguably one of the biggest shifts, as you can see from the tweet above, is the addition of a 'Discover' feed, which means you can publicly see what kinds of prompts people are funneling into Meta's ChatGPT competitor. That might be fine if those people knew that what they were asking Meta AI would be displayed in a public feed that's prominently featured in the app, but based on the prompts highlighted by one tech investor, Justine Moore, on X, it doesn't really look like people do know that, and it's bad, folks. Very bad.

'I spent an hour browsing the app, and saw:
-Medical and tax records
-Private details on court cases
-Draft apology letters for crimes
-Home addresses
-Confessions of affairs
…and much more!
Not going to post any of those – but here's my favorite so far' — Justine Moore (@venturetwins) June 12, 2025

As Moore notes, users are throwing all sorts of prompts into Meta AI without knowing that they're being displayed publicly, including sensitive medical and tax documents, addresses, and deeply personal information—including, but not limited to—confessions of affairs, crimes, and court cases. The list, unfortunately, goes on.

I took a short stroll through the Meta AI app myself just to verify that this was seemingly still happening as of writing this post, and I regret to inform you all that the pain train seems to be rolling onward. In my exploration of the app, I found seemingly confidential prompts addressing doubts/issues with significant others, including one woman questioning whether her male partner is truly a feminist. I also uncovered a self-identified 66-year-old man asking where he can find women who are interested in 'older men,' and, just a few hours later, inquiring about transgender women in Thailand. I can't say for sure, but I am going to guess that neither of these prompts was meant for public consumption. I mean, hey, different strokes for different folks, but typically when I have doubts about my relationship and am seeking dating advice, I prefer to keep it between me and a therapist or close friend.

Gizmodo has reached out to Meta about whether they're aware of the problem and will update this post with a response if and when we receive one. For now, if you're going to use the Meta AI app, it's advisable to go to your settings (or your parents' settings) and stop posting publicly. To do that, pull open the Meta AI app and:

1. Tap your profile icon at the top right.
2. Tap 'Data & Privacy' under 'App settings.'
3. Tap 'Manage your information.'
4. Tap 'Make all your prompts visible to only you.'

If you've already posted publicly and want to remove those posts, you can also tap 'Delete all prompts.'
I've seen a lot of bad app design in my day, but I'll be honest, this is among the worst. In fact, it's evocative of a couple of things, including when Facebook released a search bar back in the day that some users mistook for the post field, causing them to type and publish what they thought were private searches. There's also a hint of Venmo here, from the era when users were unaware that their payments were being cataloged publicly. As you might imagine, those public payments led to some unsavory details being aired. For now, I'd say it's probably best to steer clear of using Meta AI for anything sensitive, because you might get a whole lot more publicity than you bargained for.

Yahoo
2 days ago
- Business
- Yahoo
Facebook chatbot shares boomers' relationship questions with the world
Facebook users are accidentally sharing legal woes, relationship dramas and health problems with the world after failing to realise that a chatbot they were speaking to was making the messages public.

Internet users have publicly disclosed potentially embarrassing information or private personal details in conversations with an artificial intelligence (AI) app built by Meta. While the messages do not appear to have been meant for the public, dozens of posts have been shared on Meta AI's public 'Discover' feed.

In one post seen by The Telegraph, a user asked the chatbot to write a character reference ahead of a court hearing, giving their full name. 'A character letter for court can be a crucial document,' Meta's chatbot said. 'To help me write a strong letter, can you tell me a bit more.' The person posting replied: 'I am hoping the court can find some leniency.'

In another, a man appears to be asking for advice choosing between his wife and another woman. Other users shared long, rambling voice notes.

Mark Zuckerberg's company launched its standalone Meta AI app in April. On it, users can speak to the company's chatbot, asking it questions in a manner similar to OpenAI's ChatGPT. Public sharing of conversations is not turned on by default, and users have to log in and confirm that they want to publish a conversation. However, many of the posts suggest users are unaware that their conversations have been aired in public, and that people may have opted to publish them without fully realising what they were doing.

In a post on X, Justine Moore, a partner at venture capital firm Andreessen Horowitz, said: 'Wild things are happening on Meta's AI app. The feed is almost entirely boomers who seem to have no idea their conversations with the chatbot are posted publicly.'

In other shared conversations, users appeared to confuse Meta AI for a customer service bot, or asked it to provide technical support, such as helping them to log in. One chat begins: 'Dear Instagram Team, I am writing to respectfully request the reactivation of my Instagram account.'

When it launched Meta AI, the tech company said its public feed was intended as a 'place to share and explore how others are using AI'. It said: 'You can see the best prompts people are sharing, or remix them to make them your own. And as always, you're in control: nothing is shared to your feed unless you choose to post it.'

Technology giants have been aggressively pushing AI features despite fears that the tools are leaving social media filled with so-called AI 'slop' – nonsense images and conversations generated by bots. AI chatbots have been involved in a series of blunders. A Google chatbot last year told users it was safe to eat rocks. In 2023, a chatbot from Microsoft went rogue and repeatedly expressed its love for users.

Meta was contacted for comment.

Yahoo
2 days ago
- Yahoo
There's a specific reason why short men try to ‘appear more powerful': study confirms
'Napoleon complex,' 'short man syndrome' and 'short king' are all nicknames for short men — you've heard them, you know them — and you can probably think of a few people who display the overcompensating, arrogant, cocky behavior this category of guys is often accused of. Some might say it's a stereotype — but according to a study by the American Psychological Association, an arrogant attitude isn't the only thing these men are showing.

Researchers found that short men are more likely to show signs of jealousy and competitiveness than their taller peers. 'Psychological perceptions of height significantly influence social dynamics and behaviors,' the study pointed out. 'Understanding these associations can inform strategies for promoting positive body image and mental well-being, particularly among individuals who may feel marginalized by societal height standards.'

Another study revealed that men who lack height also have narcissistic tendencies — and try to appear more powerful than they probably are. 'Shorter people with traits such as psychopathy [lack of empathy and antisocial behaviors] can use them to demand respect, impose costs on others and impress romantic partners,' said lead researcher Monika Koslowska from the University of Wrocław in Poland, as originally reported by Men's Health. 'Appearing more powerful may, in turn, make other people perceive them as taller than they really are.'

Men are not only overcompensating for their lack of height, they're also being deceitful on dating apps by lying about or exaggerating it — and single women are wising up by using ChatGPT to expose these short frauds. 'The girls are using ChatGPT to see if men are lying about their height on dating apps,' Justine Moore, a venture capitalist from San Francisco, California, told her 361,000 X followers. 'Upload 4 pictures to [ChatGPT]. It uses proportions and surroundings to estimate height,' she instructed in her tweet.

'I tested it on 10 friends & family members,' Moore proudly wrote. 'All estimates were within 1 inch of their real height.'

You almost can't blame men for telling a white lie on their dating profiles, considering researchers at Texas A&M International University found that 'Women considered taller men with larger SHRs [shoulder-to-hip ratios] as more attractive, masculine, dominant, and higher in fighting ability.' Their findings also pointed out that '…these sexually dimorphic features [height and a larger SHR] are a reflection of men's genetic quality,' and that women view men with these physical qualities as having 'the ability to provide direct benefits' such as 'protection, resource provisioning.'