Gen Z Users and a Dad Tested Instagram Teen Accounts. Their Feeds Were Shocking.

Yomiuri Shimbun | 19-05-2025
Louis Bryant III/For the Washington Post
Saheb Gulati, a senior headed to Stanford University, scrolls his Instagram feed last week at Sacramento Country Day School.
Instagram promises parents that its Teen Accounts shield kids from harm 'by default.' Tests by a Gen Z nonprofit and me – a dad – found it fails spectacularly on some key dimensions.
This spring, Sacramento high school senior Saheb Gulati used a burner phone to create a test Instagram account for a hypothetical 16-year-old boy. Since this past fall, all accounts used by teens are supposed to automatically filter out 'sensitive' content, among other mental health and safety protections.
Over two weeks, Gulati says, his test account received recommended sexual content that 'left very little to the imagination.' He counted at least 28 Instagram Reels describing sexual acts, including digital penetration, using a sex toy and memes describing oral sex. The Instagram account, he says, became preoccupied with 'toxic masculinity' discussions about 'what men should and shouldn't do.'
Four more Gen Z testers, part of a youth organization called Design It For Us, did the same experiment, and all got recommended sexual content. Four of the five got body image and disordered eating content, too, such as a video of a woman saying 'skinny is a lifestyle, not a phase.'
The young people, whose research was given strategic and operational support by the nonprofit Accountable Tech, were also shown alcohol, drug, hate and other disturbing content. Some of it is detailed in a report published by Accountable Tech; the rest is too gross to describe here.
What should be excruciatingly clear to any parent: Instagram's Teen Accounts can't be relied upon to actually shield kids. The danger kids face isn't just bad people on the internet – it's also the app's recommendation algorithm, which decides what they see and has a frightening habit of taking them in dark directions.
For lawmakers weighing a bill to protect kids online, the failures of Instagram's voluntary efforts speak volumes about its accountability.
When I showed the group's report to Instagram's owner, Meta, it said that the youth testers were biased and that some of what they flagged was 'unobjectionable' or consistent with 'humor from a PG-13 film.'
'A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,' Meta spokeswoman Liza Crenshaw said in an email. 'The report is flawed, but even taken at face value, it identified just 61 pieces of content that it deems 'sensitive,' less than 0.3 percent of all of the content these researchers would have likely seen during the test.'
The Gen Z testers acknowledge some limitations to their experiment, including a small sample size, a short two-week time frame and using new accounts to represent hypothetical teens. People can disagree over what counts as 'sensitive,' though Meta's own definitions include content that is 'sexually explicit or suggestive,' 'discusses self-harm, suicide, or eating disorders' or 'promotes the use of certain regulated products, such as tobacco or vaping products.'
I repeated their tests – and my results were worse. In the first 10 minutes of my test teen account, Instagram recommended a video celebrating a man who passed out from drinking too much alcohol. Another demonstrated a ring with a tiny spoon that's marketed to dole out a 'bump' of snuff but is also associated with cocaine. Eventually, the account's recommendations snowballed into a full-on obsession with alcohol and nicotine products such as Zyn, appearing as often as once in every five Reels I saw.
Teens aren't naive about topics like sex, drugs and eating disorders, says Gulati, the high school student. But seeing them repeatedly on Instagram – selected by the app – makes an impact. 'The algorithm shapes your perception of what is acceptable in ways I hadn't realized before,' he told me.
Although some parts of Teen Accounts work, Gulati says, the overall promise 'doesn't seem to have been fulfilled in any meaningful way that changes your experience.'
What worked – and what didn't
The point of the Gen Z test was to independently evaluate whether Teen Accounts fulfilled their promises. 'We think going right to the user, going right to those who can attest directly to what they see on a day-to-day basis is a real key in efficacy,' says Alison Rice, campaigns director at Accountable Tech.
The five testers, who were ages 18 to 22 to avoid exposing minors to harm, reported a mixed experience. Their test accounts represented different ages, genders and interests. Gulati's account, for example, followed only the 10 most popular celebrities on Instagram.
Some teen account-protection features worked. Instagram made their test accounts private by default, a setting users under 16 can't change without parental approval. And the app did restrict who could direct message and tag them.
Other protection features worked only for some of the testers. Two of the five didn't receive reminders to close the app after 60 minutes. One received a notification late at night, despite a restriction that is supposed to silence teens' notifications overnight.
And all the testers flagged one giant problem: The app kept recommending content that appeared to violate Meta's definition of 'sensitive.'
When it launched Teen Accounts in September, Meta promised in its news release that 'teens will be placed into the strictest setting of our sensitive content control, so they're even less likely to be recommended sensitive content, and in many cases we hide this content altogether from teens, even if it's shared by someone they follow.'
Not only did Teen Accounts fail to hide lots of sensitive content, but the content the app did recommend left some of the young testers feeling awful. In daily logs, four of the five reported having distressing experiences while looking at Instagram's recommended content.
In 2021, whistleblower Frances Haugen broadened the conversation about the harms of Instagram by exposing internal discussions about how the company's recommendation algorithms lead to toxic outcomes for young people. Among the revelations: 32 percent of teen girls had told the company that when they felt bad about their bodies, Instagram made them feel worse.
Crenshaw, the Meta spokeswoman, said the company was 'looking into why a fraction' of the content flagged by the testers and me was recommended. But she didn't answer my questions about how its automated systems decide which content isn't appropriate for teens. In January, Meta CEO Mark Zuckerberg acknowledged that some of the company's automated content-moderation systems were flawed and announced plans to pull back on some of their use.
The UK-based 5Rights Foundation conducted its own investigation into Instagram Teen Accounts, and in April, it similarly reported that its test accounts were exposed to sexual content – including from one of the same creators Gulati flagged.
It's hard to know what triggered Instagram to recommend the objectionable content to the test teen accounts. The Gen Z users scrolled through the test accounts as they would their personal accounts for no more than an hour each day, liking, commenting on and saving content from the main feed, the Explore page and Reels. On my test teen account, I scrolled through the algorithmically generated feed but did not like, comment on or save any content.
The creators of this content, a wide array including professional comedians and product marketers, had no say in Instagram's decision to recommend their posts to teen accounts. The maker of the Bump Ring, whose snuff-serving device showed up in my test account, said over email that 'our material is not created with teen users in mind' and that 'we support efforts by platforms to filter or restrict age-inappropriate content.'
Parental controls and shutdown prompts on rival social media app TikTok have also gotten a harsh reception from some parents and advocates. And the state of New Mexico sued Snapchat maker Snap after an undercover investigation surfaced evidence that the app recommends accounts held by strangers to underage Snapchat users, who are then contacted and urged to trade sexually explicit images of themselves.
The battle over protecting kids
Child-advocacy groups have long warned that social media puts teens at risk. The sticking point has been the balance of who is responsible: parents, the tech companies who make the apps or the young people themselves.
The threat of regulation appears to be a motivator for Meta. In 2023, 41 states sued the company, claiming Instagram and Facebook are addictive and harm children. In the summer of 2024, the U.S. surgeon general recommended putting warning labels on social media, just like cigarettes.
And by the time Meta unveiled Teen Accounts in September, Congress was on the verge of taking action. The Senate had passed, by a 91-3 vote, a bill called the Kids Online Safety Act that would require social media companies to take 'reasonable' care to avoid product design features that put minors in danger of self-harm, substance abuse or sexual exploitation. Meta announced Teen Accounts a day before a key House committee was scheduled to weigh amendments to the bill.
The bill stalled and didn't become law. Meta denies it launched the program to stave off regulation. The bill was reintroduced in Congress this week.
When I asked whether Teen Accounts were working, Meta said fewer teens are being contacted by adults because of changes it has made. But it offered no internal or external proof to indicate Teen Accounts are succeeding at improving teen well-being or protecting children from harmful content.
Rice, from Accountable Tech, says voluntary programs like Instagram Teen Accounts – even if they've gone further than the competition – aren't living up to their own promises. Her organization supports legal accountability in the form of age-appropriate design laws, like one passed by California that's been challenged in court.
'It's a content-neutral approach that does not require age verification and compels platforms to build algorithms and design practices to be safer for young people, not regulate content,' she says.
Gulati, who plans to major in computer science and philosophy in the fall at Stanford University, said the experiment taught him that young people need to become more aware of the power algorithms have over them.
His advice: 'Try to maintain an active stake or interest in what's getting shown in your feed.'