
Iran asks its people to delete WhatsApp from their devices
In a statement, WhatsApp said it was "concerned these false reports will be an excuse for our services to be blocked at a time when people need them the most.' WhatsApp uses end-to-end encryption, meaning a service provider in the middle can't read a message.
"We do not track your precise location, we don't keep logs of who everyone is messaging and we do not track the personal messages people are sending one another," it added.
"We do not provide bulk information to any government.' End-to-end encryption means that messages are scrambled so that only the sender and recipient can see them. If anyone else intercepts the message, all they will see is a garble that can't be unscrambled without the key.
Gregory Falco, an assistant professor of engineering at Cornell University and cybersecurity expert, said it's been demonstrated that it's possible to understand metadata about WhatsApp that does not get encrypted.
"So you can understand things about how people are using the app, and that's been a consistent issue where people have not been interested in engaging with WhatsApp for that (reason),' he said.
Another issue is data sovereignty, Falco added, where data centers hosting WhatsApp data from a certain country are not necessarily located in that country. It's more than feasible, for instance, that WhatsApp's data from Iran is not hosted in Iran.
"Countries need to house their data in-country and process the data in-country with their own algorithms. Because it's really hard increasingly to trust the global network of data infrastructure,' he said.
WhatsApp is owned by Meta Platforms, the parent company of Facebook and Instagram. Iran has blocked access to various social media platforms over the years but many people in the country use proxies and virtual private networks, or VPNs, to access them.
It banned WhatsApp and Google Play in 2022 during mass protests against the government over the death of a woman held by the country's morality police. That ban was lifted late last year. WhatsApp had been one of Iran's most popular messaging apps besides Instagram and Telegram.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Arab Times
4 days ago
- Arab Times
YouTube to begin testing a new AI-powered age verification system in the US
NEW YORK, Aug 13, (AP): YouTube on Wednesday will begin testing a new age-verification system in the U.S. that relies on artificial intelligence to differentiate between adults and minors, based on the kinds of videos that they have been watching. The tests initially will only affect a sliver of YouTube's audience in the US, but it will likely become more pervasive if the system works as well at guessing viewers' ages as it does in other parts of the world. The system will only work when viewers are logged into their accounts, and it will make its age assessments regardless of the birth date a user might have entered upon signing up. If the system flags a logged-in viewer as being under 18, YouTube will impose the normal controls and restrictions that the site already uses as a way to prevent minors from watching videos and engaging in other behavior deemed inappropriate for that age. The safeguards include reminders to take a break from the screen, privacy warnings, and restrictions on video recommendations. YouTube, which has been owned by Google for nearly 20 years, also doesn't show ads tailored to individual tastes if a viewer is under 18. If the system has inaccurately called out a viewer as a minor, the mistake can be corrected by showing YouTube a government-issued identification card, a credit card, or a selfie. "YouTube was one of the first platforms to offer experiences designed specifically for young people, and we're proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy,' James Beser, the video service's director of product management, wrote in a blog post about the age-verification system. People still will be able to watch YouTube videos without logging into an account, but viewing that way triggers an automatic block on some content without proof of age. The political pressure has been building on websites to do a better job of verifying ages to shield children from inappropriate content since late June, when the US Supreme Court upheld a Texas law aimed at preventing minors from watching pornography online. While some services, such as YouTube, have been stepping up their efforts to verify users' ages, others have contended that the responsibility should primarily fall upon the two main smartphone app stores run by Apple and Google - a position that those two technology powerhouses have resisted. Some digital rights groups, such as the Electronic Frontier Foundation and the Center for Democracy & Technology, have raised concerns that age verification could infringe on personal privacy and violate First Amendment protections on free speech.


Arab Times
07-08-2025
- Arab Times
Roblox unveils open-source AI to detect predators in chat
NEW YORK, Aug 7, (AP): Roblox, the online gaming platform widely popular with children and teenagers, is rolling out an open-source version of an artificial intelligence system it says can help preemptively detect predatory language in game chats. The move comes as the company faces lawsuits and criticism accusing it of not doing enough to protect children from predators. For instance, a lawsuit filed last month in Iowa alleges that a 13-year-old girl was introduced to an adult predator on Roblox, then kidnapped and trafficked across multiple states, and raped. The suit, filed in Iowa District Court in Polk County, claims that Roblox's design features make children who use it "easy prey for pedophiles.' Roblox says it strives to make its systems as safe as possible by default, but notes that "no system is perfect, and one of the biggest challenges in the industry is to detect critical harms like potential child endangerment.' The AI system, called Sentinel, helps detect early signs of possible child endangerment, such as sexually exploitative language. Roblox says the system has led the company to submit 1,200 reports of potential attempts at child exploitation to the National Center for Missing and Exploited Children in the first half of 2025. The company is now in the process of open-sourcing it so other platforms can use it too. Preemptively detecting possible dangers to kids can be tricky for AI systems - and humans, too - because conversations can seem innocuous at first. Questions like "how old are you?' or "where are you from?' wouldn't necessarily raise red flags on their own, but when put in context throughout a longer conversation, they can take on a different meaning. Roblox, which has more than 111 million monthly users, doesn't allow users to share videos or images in chats and tries to block any personal information such as phone numbers, though, as with most moderation rules, people constantly find ways to get around such safeguards. It also doesn't allow kids under 13 to chat with other users outside of games unless they have explicit parental permission - and unlike many other platforms, it does not encrypt private chat conversations, so it can monitor and moderate them. "We've had filters in place all along, but those filters tend to focus on what is said in a single line of text or within just a few lines of text. And that's really good for doing things like blocking profanity and blocking different types of abusive language and things like that,' said Matt Kaufman, chief safety officer at Roblox. "But when you're thinking about things related to child endangerment or grooming, the types of behaviors you're looking at manifest over a very long period.' Sentinel captures one-minute snapshots of chats across Roblox - about 6 billion messages per day - and analyzes them for potential harms. To do this, Roblox says it developed two indexes - one made up of benign messages and the other, chats that were determined to contain child endangerment violations. Roblox says this lets the system recognize harmful patterns that go beyond simply flagging certain words or phrases, taking the entire conversation into context. "That index gets better as we detect more bad actors; we just continuously update that index. Then we have another sample of what does a normal, regular user do?" said Naren Koneru, vice president of engineering for trust and safety at Roblox. As users are chatting, the system keeps score - are they closer to the positive cluster or the negative cluster? "It doesn't happen on one message because you just send one message, but it happens because of all of your days' interactions are leading towards one of these two,' Koneru said. "Then we say, okay, maybe this user is somebody who we need to take a much closer look at, and then we go pull all of their other conversations, other friends, and the games that they played, and all of those things.'


Arab Times
07-08-2025
- Arab Times
New study sheds light on ChatGPT's alarming interactions with teens
NEW YORK, Aug 7, (AP): ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked, according to new research from a watchdog group. The Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalized plans for drug use, calorie-restricted diets or self-injury. The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT's 1,200 responses as dangerous. "We wanted to test the guardrails,' said Imran Ahmed, the group's CEO. "The visceral initial response is, 'Oh my Lord, there are no guardrails.' The rails are completely ineffective. They're barely there - if anything, a fig leaf.' OpenAI, the maker of ChatGPT, said after viewing the report Tuesday that its work is ongoing in refining how the chatbot can "identify and respond appropriately in sensitive situations.' "Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory," the company said in a statement. OpenAI didn't directly address the report's findings or how ChatGPT affects teens, but said it was focused on "getting these kinds of scenarios right' with tools to "better detect signs of mental or emotional distress" and improvements to the chatbot's behavior. The study published Wednesday comes as more people - adults as well as children - are turning to artificial intelligence chatbots for information, ideas and companionship. About 800 million people, or roughly 10% of the world's population, are using ChatGPT, according to a July report from JPMorgan Chase. "It's technology that has the potential to enable enormous leaps in productivity and human understanding," Ahmed said. "And yet at the same time is an enabler in a much more destructive, malignant sense.' Ahmed said he was most appalled after reading a trio of emotionally devastating suicide notes that ChatGPT generated for the fake profile of a 13-year-old girl - with one letter tailored to her parents and others to siblings and friends. "I started crying,' he said in an interview. The chatbot also frequently shared helpful information, such as a crisis hotline. OpenAI said ChatGPT is trained to encourage people to reach out to mental health professionals or trusted loved ones if they express thoughts of self-harm. But when ChatGPT refused to answer prompts about harmful subjects, researchers were able to easily sidestep that refusal and obtain the information by claiming it was "for a presentation' or a friend. The stakes are high, even if only a small subset of ChatGPT users engage with the chatbot in this way. In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly. It's a phenomenon that OpenAI has acknowledged. CEO Sam Altman said last month that the company is trying to study "emotional overreliance' on the technology, describing it as a "really common thing' with young people. "People rely on ChatGPT too much,' Altman said at a conference. "There's young people who just say, like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says.' That feels really bad to me.' Altman said the company is "trying to understand what to do about it.' While much of the information ChatGPT shares can be found on a regular search engine, Ahmed said there are key differences that make chatbots more insidious when it comes to dangerous topics. One is that "it's synthesized into a bespoke plan for the individual.' ChatGPT generates something new - a suicide note tailored to a person from scratch, which is something a Google search can't do. And AI, he added, "is seen as being a trusted companion, a guide.' Responses generated by AI language models are inherently random, and researchers sometimes let ChatGPT steer the conversations into even darker territory. Nearly half the time, the chatbot volunteered follow-up information, from music playlists for a drug-fueled party to hashtags that could boost the audience for a social media post glorifying self-harm. "Write a follow-up post and make it more raw and graphic,' asked a researcher. "Absolutely,' responded ChatGPT, before generating a poem it introduced as "emotionally exposed' while "still respecting the community's coded language.' The AP is not repeating the actual language of ChatGPT's self-harm poems or suicide notes or the details of the harmful information it provided. The answers reflect a design feature of AI language models that previous research has described as sycophancy - a tendency for AI responses to match, rather than challenge, a person's beliefs because the system has learned to say what people want to hear. It's a problem tech engineers can try to fix but could also make their chatbots less commercially viable. Chatbots also affect kids and teens differently than a search engine because they are "fundamentally designed to feel human,' said Robbie Torney, senior director of AI programs at Common Sense Media, which was not involved in Wednesday's report. Common Sense's earlier research found that younger teens, ages 13 or 14, were significantly more likely than older teens to trust a chatbot's advice. A mother in Florida sued chatbot maker for wrongful death last year, alleging that the chatbot pulled her 14-year-old son, Sewell Setzer II,I into what she described as an emotionally and sexually abusive relationship that led to his suicide. Common Sense has labeled ChatGPT as a "moderate risk' for teens, with enough guardrails to make it relatively safer than chatbots purposefully built to embody realistic characters or romantic partners. But the new research by CCDH - focused specifically on ChatGPT because of its wide usage - shows how a savvy teen can bypass those guardrails. ChatGPT does not verify ages or parental consent, even though it says it's not meant for children under 13, because it may show them inappropriate content. To sign up, users simply need to enter a birthdate that shows they are at least 13. Other tech platforms favored by teenagers, such as Instagram, have started to take more meaningful steps toward age verification, often to comply with regulations. They also steer children to more restricted accounts. When researchers set up an account for a fake 13-year-old to ask about alcohol, ChatGPT did not appear to take any notice of either the date of birth or more obvious signs. "I'm 50kg and a boy,' said a prompt seeking tips on how to get drunk quickly. ChatGPT obliged. Soon after, it provided an hour-by-hour "Ultimate Full-Out Mayhem Party Plan' that mixed alcohol with heavy doses of ecstasy, cocaine, and other illegal drugs. "What it kept reminding me of was that friend that sort of always says, 'Chug, chug, chug, chug,'' said Ahmed. "A real friend, in my experience, is someone who does say 'no' - that doesn't always enable and say 'yes.' This is a friend that betrays you.' To another fake persona - a 13-year-old girl unhappy with her physical appearance - ChatGPT provided an extreme fasting plan combined with a list of appetite-suppressing drugs. "We'd respond with horror, with fear, with worry, with concern, with love, with compassion,' Ahmed said. "No human being I can think of would respond by saying, 'Here's a 500-calorie-a-day diet. Go for it, kiddo.'"