Latest news with #RachelTobac
Yahoo
a day ago
- Yahoo
Meta AI searches made public - but do all its users realise?
How would you feel if your internet search history was put online for others to see? That may be happening to some users of Meta AI without them realising, as people's prompts to the artificial intelligence tool - and the results - are posted on a public feed.

One internet safety expert said it was "a huge user experience and security problem", as some posts are easily traceable, through usernames and profile pictures, to social media accounts. This means some people may be unwittingly telling the world about things they may not want others to know they have searched for - such as asking the AI to generate scantily clad characters or help them cheat on tests. Meta has been contacted for comment.

It is not clear whether users know that their searches are being posted to a public feed on the Meta AI app and website, though the process is not automatic. If people choose to share a post, a message pops up which says: "Prompts you post are public and visible to everyone... Avoid sharing personal or sensitive information."

The BBC found several examples of people uploading photos of school or university test questions and asking Meta AI for answers. One of the chats is titled "Generative AI tackles math problems with ease". There were also searches for women and anthropomorphic animal characters wearing very little clothing. One search, which could be traced back to a person's Instagram account through their username and profile picture, asked Meta AI to generate an image of an animated character lying outside wearing only underwear. Meanwhile, tech news outlet TechCrunch has reported examples of people posting intimate medical questions - such as how to deal with an inner thigh rash.

Meta AI, launched earlier this year, can be accessed through the company's social media platforms Facebook, Instagram and WhatsApp. It is also available as a standalone product with a public "Discover" feed. Users can opt to make their searches private in their account settings. Meta AI is currently available in the UK through a browser, while in the US it can be used through an app.

In a press release from April announcing Meta AI, the company said there would be "a Discover feed, a place to share and explore how others are using AI". "You're in control: nothing is shared to your feed unless you choose to post it," it said.

But Rachel Tobac, chief executive of US cyber security company Social Proof Security, posted on X: "If a user's expectations about how a tool functions don't match reality, you've got yourself a huge user experience and security problem." She added that people do not expect their AI chatbot interactions to be made public on a feed normally associated with social media. "Because of this, users are inadvertently posting sensitive info to a public feed with their identity linked," she said.
Yahoo
2 days ago
- Yahoo
The Meta AI app is a privacy disaster
It sounds like the start of a 21st-century horror film: Your browser history has been public all along, and you had no idea.

That's basically what it feels like right now on the new stand-alone Meta AI app, where swathes of people are publishing their ostensibly private conversations with the chatbot. When you ask the AI a question, you have the option of hitting a share button, which directs you to a screen showing a preview of the post, which you can then publish. But some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world. When I woke up this morning, I did not expect to hear an audio recording of a man in a Southern accent asking, 'Hey, Meta, why do some farts stink more than other farts?'

Flatulence-related inquiries are the least of Meta's problems. On the Meta AI app, I have seen people ask for help with tax evasion, ask whether their family members would be arrested for their proximity to white-collar crimes, and ask how to write a character reference letter for an employee facing legal troubles, with that person's first and last name included. Others, like security expert Rachel Tobac, found examples of people's home addresses and sensitive court details, among other private information. When reached by TechCrunch, a Meta spokesperson did not comment on the record.

Whether you admit to committing a crime or having a weird rash, this is a privacy nightmare. Meta does not indicate to users what their privacy settings are as they post, or where they are even posting to. So, if you log into Meta AI with Instagram, and your Instagram account is public, then so too are your searches about how to meet 'big booty women.'

Much of this could have been avoided if Meta hadn't shipped an app built on the bonkers idea that people would want to see each other's conversations with Meta AI, or if anyone at Meta had foreseen that this kind of feature would be problematic. There is a reason why Google has never tried to turn its search engine into a social media feed — or why AOL's publication of pseudonymized users' searches in 2006 went so badly. It's a recipe for disaster.

According to Appfigures, an app intelligence firm, the Meta AI app has been downloaded only 6.5 million times since it debuted on April 29. That might be impressive for an indie app, but we aren't talking about a first-time developer making a niche game. This is one of the world's wealthiest companies shipping an app built on technology it has invested billions of dollars in.

As each second passes, these seemingly innocuous inquiries on the Meta AI app inch closer to a viral mess. In a matter of hours, more and more posts have appeared on the app that indicate clear trolling, like someone sharing their résumé and asking for a cybersecurity job, or an account with a Pepe the Frog avatar asking how to make a water bottle bong. If Meta wanted to get people to actually use its Meta AI app, then public embarrassment is certainly one way of getting attention.


TECHx
14-02-2025
- TECHx
Scammers Are Looking for Love Too – Here's How to Stay Safe Online
Valentine's Day is a time for love and connection, but for scammers, it's also a prime opportunity to exploit emotions and financial vulnerabilities. Romance scams have become an all-too-common threat, with fraudsters posing as attractive individuals, successful professionals, or even military personnel to deceive unsuspecting victims. These scams are not just limited to dating apps—they spread through social media, emails, and even discussion forums.

On February 12, Meta joined forces with leading internet safety expert and ethical hacker Rachel Tobac to share crucial tips on spotting and avoiding romance scams. Their message is clear: staying vigilant and informed is the best defense.

How Romance Scammers Operate

Romance scammers craft elaborate personas, often using stolen or fake identities to gain trust. They engage victims in heartfelt conversations, build an emotional connection, and then make urgent financial requests—often disguised as emergencies, travel expenses, or investment opportunities. Sometimes, they impersonate celebrities, leveraging their public appeal to manipulate victims.

How to Protect Yourself

1. Beware of Unsolicited Messages
Scammers frequently initiate contact through 'cold' messages—random connection requests on social media, dating platforms, or messaging apps. If someone you don't know suddenly reaches out, be cautious. Utilize in-app settings on platforms like Messenger, Instagram, and WhatsApp to control who can contact you.

2. Verify, Verify, Verify
Be politely paranoid when engaging with new connections online. If someone seems too good to be true, take steps to verify their identity: Look up their profile details online. Check when their account was created—new or sparse profiles can be red flags. Conduct a reverse image search to see if their photos have been stolen from elsewhere.

3. Never Send Money or Personal Information
If someone you've just met online asks for money, gift cards, or sensitive details, it's likely a scam. Scammers often fabricate urgent crises to push their victims into making quick payments. Remember: genuine connections do not require financial transactions.

How Meta is Combating Romance Scams

Beyond raising awareness, Meta actively detects and removes fraudulent accounts. Through collaboration with open-source researchers at Graphika, Meta has identified and shut down scam networks, blocked fraudulent websites, and strengthened its enforcement efforts.

As digital fraud tactics evolve, so must our awareness. This Valentine's Day, protect your heart—and your wallet—by staying informed, skeptical, and proactive. Love is real, but so are the scams. Stay safe.
Yahoo
12-02-2025
- Yahoo
Meta warns users not to fall for romance scammers posing as celebrities or military
Think you might have met someone 'attractive, single and successful' on Facebook or Instagram? You might want to think again, Meta says.

Ahead of Valentine's Day, the company is once again warning users not to fall for romance scams. These kinds of schemes, in which scammers create fictitious identities to form online relationships with unsuspecting victims, aren't exactly new. (The FTC says that people lost more than half a billion dollars to romance scams in 2021.) But the people behind these scams are apparently persistent. Meta says that already in 2025 it has taken down more than 116,000 accounts and pages across Facebook and Instagram that were linked to romance scams. In 2024, it removed more than 408,000 such accounts.

According to Meta, these scam accounts often originate in West African countries, with scammers impersonating members of the US military or famous celebrities. In both cases, they'll claim to be 'looking for love' and will strike up conversations with people on Facebook, Instagram and WhatsApp, as well as other messaging platforms. Eventually, the scammer will request gift cards, crypto, or other types of payments.

Meta has taken steps to fight these types of schemes. The company said last year it would bring back facial recognition tech to address celebrity impersonation. It also works with other companies to shut down organized groups of scammers. Still, David Agranovich, director of threat disruption at Meta, noted that "scammers evolve consistently." Researchers also say that AI has made it even easier for scammers to assume convincing fictitious identities.

'In the last three or four months, there's a couple of different tools that have come out where they're free, they're accessible, they're easy to use, and they allow the attacker to transform their face dynamically within the video call,' Rachel Tobac, CEO of SocialProof Security, said during a call with reporters. 'They can also use these deepfake bots that allow you to build a persona, place phone calls, use a voice clone and a human actually doesn't even need to be involved.'