
Gen Z Users and a Dad Tested Instagram Teen Accounts. Their Feeds Were Shocking.
Louis Bryant III/For the Washington Post
Saheb Gulati, a senior headed to Stanford University, scrolls his Instagram feed last week at Sacramento Country Day School.
Instagram promises parents that its Teen Accounts shield kids from harm 'by default.' Tests by a Gen Z nonprofit and by me – a dad – found that it fails spectacularly on some key dimensions.
This spring, Sacramento high school senior Saheb Gulati used a burner phone to create a test Instagram account for a hypothetical 16-year-old boy. As of this past fall, all accounts used by teens are supposed to automatically filter out 'sensitive' content, among other mental health and safety protections.
Over two weeks, Gulati says, his test account received recommended sexual content that 'left very little to the imagination.' He counted at least 28 Instagram Reels describing sexual acts, including digital penetration and the use of a sex toy, as well as memes describing oral sex. The Instagram account, he says, became preoccupied with 'toxic masculinity' discussions about 'what men should and shouldn't do.'
Four more Gen Z testers, part of a youth organization called Design It For Us, did the same experiment, and all got recommended sexual content. Four of the five got body image and disordered eating content, too, such as a video of a woman saying 'skinny is a lifestyle, not a phase.'
The young people, whose research was given strategic and operational support by the nonprofit Accountable Tech, were also shown alcohol, drug, hate and other disturbing content. Some of it is detailed in a report published by Accountable Tech; much of it is too gross to describe here.
What should be excruciatingly clear to any parent: Instagram's Teen Accounts can't be relied upon to actually shield kids. The danger they face isn't just bad people on the internet – it's also the app's recommendation algorithm, which decides what your kids see and has a frightening habit of taking them in dark directions.
For lawmakers weighing a bill to protect kids online, the failures of Instagram's voluntary efforts speak volumes about its accountability.
When I showed the group's report to Instagram's owner, Meta, it said that the youth testers were biased and that some of what they flagged was 'unobjectionable' or consistent with 'humor from a PG-13 film.'
'A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,' Meta spokeswoman Liza Crenshaw said in an email. 'The report is flawed, but even taken at face value, it identified just 61 pieces of content that it deems 'sensitive,' less than 0.3 percent of all of the content these researchers would have likely seen during the test.'
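For scale, it's worth doing the arithmetic Meta's statement implies. Here is a back-of-the-envelope check – the figures come from the statement above, but the calculation and its framing are mine, not the report's:

```python
# Rough check of what Meta's "less than 0.3 percent" claim implies.
# Inputs come from the spokeswoman's statement; this is illustrative math only.
flagged = 61          # pieces of content the report deemed "sensitive"
max_share = 0.003     # "less than 0.3 percent," per Meta

implied_total = flagged / max_share
print(f"Implied content viewed: more than {implied_total:,.0f} pieces")
# -> more than 20,333 pieces across the test accounts
```

In other words, by Meta's own math, the testers would have scrolled past on the order of 20,000 pieces of content over the two weeks – a reminder of the sheer volume the app serves, even when the flagged share is small.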
The Gen Z testers acknowledge some limitations to their experiment, including a small sample size, a short two-week time frame and using new accounts to represent hypothetical teens. People can disagree over what counts as 'sensitive,' though Meta's own definitions include content that is 'sexually explicit or suggestive,' 'discusses self-harm, suicide, or eating disorders' or 'promotes the use of certain regulated products, such as tobacco or vaping products.'
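To make the dispute concrete: under Meta's definitions, a 'strictest setting' content control amounts to excluding any post tagged with a sensitive category before it can be recommended. The sketch below is hypothetical – the category names paraphrase Meta's published definitions, but the types, labels and logic are invented for illustration, since Instagram's actual ranking and moderation systems are not public:

```python
# Hypothetical sketch of a "strictest setting" sensitive-content gate.
# Category names paraphrase Meta's public definitions; everything else is
# invented for illustration. The real system presumably relies on ML
# classifiers and is far more complex than a label lookup.
from dataclasses import dataclass, field

SENSITIVE_CATEGORIES = {
    "sexually_explicit_or_suggestive",
    "self_harm_suicide_or_eating_disorders",
    "regulated_products",  # e.g., tobacco or vaping products
}

@dataclass
class Post:
    post_id: str
    labels: set = field(default_factory=set)  # categories a classifier assigned

def eligible_for_teen_feed(post: Post) -> bool:
    """Under the strictest setting, any sensitive label excludes the post."""
    return not (post.labels & SENSITIVE_CATEGORIES)

posts = [
    Post("reel_1", {"comedy"}),
    Post("reel_2", {"comedy", "sexually_explicit_or_suggestive"}),
]
print([p.post_id for p in posts if eligible_for_teen_feed(p)])  # -> ['reel_1']
```

The testers' complaint, in these terms, is that content which plainly belongs in those categories kept passing the gate.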
I repeated their tests – and my results were worse. In the first 10 minutes of my test teen account, Instagram recommended a video celebrating a man who passed out from drinking too much alcohol. Another demonstrated a ring with a tiny spoon that's marketed to dole out a 'bump' of snuff but is also associated with cocaine. Eventually, the account's recommendations snowballed into a full-on obsession with alcohol and nicotine products such as Zyn, with that content appearing as often as once in every five Reels I saw.
Teens aren't naive about topics like sex, drugs and eating disorders, says Gulati, the high school student. But seeing them repeatedly on Instagram – selected by the app – makes an impact. 'The algorithm shapes your perception of what is acceptable in ways I hadn't realized before,' he told me.
Despite some parts of Teen Accounts that work, Gulati says, the overall promise 'doesn't seem to have been fulfilled in any meaningful way that changes your experience.'
What worked – and what didn't
The point of the Gen Z test was to independently evaluate whether Teen Accounts fulfilled their promises. 'We think going right to the user, going right to those who can attest directly to what they see on a day-to-day basis is a real key in efficacy,' says Alison Rice, campaigns director at Accountable Tech.
The five testers, who were 18 to 22 to avoid exposing minors to harm, reported a mixed experience. Their test accounts represented different ages, genders and interests. Gulati's account, for example, followed only the 10 most popular celebrities on Instagram.
Some teen account-protection features worked. Instagram made their test accounts private by default, a setting users under 16 can't change without parental approval. And the app did restrict who could direct message and tag them.
Other protection features worked only for some of the testers. Two of the five didn't receive reminders to close the app after 60 minutes. One of them received a notification late at night, despite the accounts' promised overnight pause on notifications.
And all the testers flagged one giant problem: The app kept recommending content that appeared to violate Meta's definition of 'sensitive.'
When it launched Teen Accounts in September, Meta promised in its news release that 'teens will be placed into the strictest setting of our sensitive content control, so they're even less likely to be recommended sensitive content, and in many cases we hide this content altogether from teens, even if it's shared by someone they follow.'
Not only did Teen Accounts fail to hide lots of sensitive content, but the content the app did recommend left some of the young testers feeling awful. In daily logs, four out of the five reported having distressing experiences while looking at Instagram's recommended content.
In 2021, whistleblower Frances Haugen broadened the conversation about the harms of Instagram by exposing internal discussions about how the company's recommendation algorithms lead to toxic outcomes for young people. Among the revelations: 32 percent of teen girls had told the company that when they felt bad about their bodies, Instagram made them feel worse.
Crenshaw, the Meta spokeswoman, said the company was 'looking into why a fraction' of the content flagged by the testers and by me was recommended. But she didn't answer my questions about how its automated systems decide which content isn't appropriate for teens. In January, Meta CEO Mark Zuckerberg acknowledged that some of the company's automated content-moderation systems were flawed and announced plans to pull back on some of their use.
The UK-based 5Rights Foundation conducted its own investigation into Instagram Teen Accounts, and in April, it similarly reported that its test accounts were exposed to sexual content – including content from one of the same creators Gulati flagged.
It's hard to know what triggered Instagram to recommend the objectionable content to the test teen accounts. The Gen Z users scrolled through the test accounts as they would their personal accounts for no more than an hour each day, liking, commenting on and saving content from the main feed, the Explore page and Reels. On my test teen account, I scrolled through the algorithmically generated feed but did not like, comment or save any content.
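One plausible reason even my passive account went dark, sketched below, is that recommendation systems typically treat watch time itself as engagement: you don't have to like or save a Reel to tell the algorithm you lingered on it. The weights and signal names here are invented for illustration; Instagram's actual ranking model is not public:

```python
# Illustrative only: a toy engagement score of the kind many recommenders use.
# All weights and signal names are invented; none of this is Instagram's code.
def engagement_score(signals: dict) -> float:
    weights = {"like": 1.0, "comment": 2.0, "save": 3.0, "dwell_seconds": 0.05}
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

# A viewer who merely lingers for 40 seconds (score 2.0) outranks one who
# taps "like" but scrolls on after 8 seconds (score 1.4):
print(engagement_score({"dwell_seconds": 40}))            # 2.0
print(engagement_score({"like": 1, "dwell_seconds": 8}))  # 1.4
```

If dwell time feeds the ranker, then simply slowing down on a provocative Reel – as anyone might – can be enough to get served more of the same.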
The creators of this content, a wide array including professional comedians and product marketers, had no say in Instagram's decision to recommend their posts to teen accounts. The maker of the Bump Ring, whose snuff-serving device showed up in my test account, said over email that 'our material is not created with teen users in mind' and that 'we support efforts by platforms to filter or restrict age-inappropriate content.'
Parental controls and shutdown prompts on rival social media app TikTok have also gotten a harsh reception from some parents and advocates. And the state of New Mexico sued Snapchat maker Snap after an undercover investigation surfaced evidence that the app recommends accounts held by strangers to underage Snapchat users, who are then contacted and urged to trade sexually explicit images of themselves.
The battle over protecting kids
Child-advocacy groups have long warned that social media puts teens at risk. The sticking point has been the balance of who is responsible: parents, the tech companies that make the apps or the young people themselves.
The threat of regulation appears to be a motivator for Meta. In 2023, 41 states sued the company, claiming Instagram and Facebook are addictive and harm children. In the summer of 2024, the U.S. surgeon general recommended putting warning labels on social media, just like cigarettes.
And by the time Meta unveiled Teen Accounts in September, Congress was on the verge of taking action. The Senate had passed, by a 91-3 vote, a bill called the Kids Online Safety Act that would require social media companies to take 'reasonable' care to avoid product design features that put minors in danger of self-harm, substance abuse or sexual exploitation. Meta announced Teen Accounts a day before a key House committee was scheduled to weigh amendments to the bill.
The bill stalled and didn't become law. Meta denies it launched the program to stave off regulation. The bill was reintroduced to Congress this week.
When I asked whether Teen Accounts were working, Meta said fewer teens are being contacted by adults because of changes it has made. But it offered no internal or external proof to indicate Teen Accounts are succeeding at improving teen well-being or protecting children from harmful content.
Rice, from Accountable Tech, says voluntary programs like Instagram Teen Accounts – even if they've gone further than the competition – aren't living up to their own promises. Her organization supports legal accountability in the form of age-appropriate design laws, like one passed by California that's been challenged in court.
'It's a content-neutral approach that does not require age verification and compels platforms to build algorithms and design practices to be safer for young people, not regulate content,' she says.
Gulati, who plans to major in computer science and philosophy in the fall at Stanford University, said the experiment taught him that young people need to become more aware of the power algorithms have over them.
His advice: 'Try to maintain an active stake or interest in what's getting shown in your feed.'