Latest news with #AccountableTech

Gen Z users and a dad tested Instagram Teen Accounts. Their feeds were shocking.

Yahoo · 19-05-2025

Instagram promises parents that its Teen Accounts shield kids from harm 'by default.' Tests by a Gen Z nonprofit and me - a dad - found it fails spectacularly on some key dimensions.

This spring, Sacramento high school senior Saheb Gulati used a burner phone to create a test Instagram account for a hypothetical 16-year-old boy. As of this past fall, all accounts used by teens are supposed to automatically filter out 'sensitive' content, among other protections, for mental health and safety.

Over two weeks, Gulati says, his test account received recommended sexual content that 'left very little to the imagination.' He counted at least 28 Instagram Reels describing sexual acts, including digital penetration, using a sex toy and memes describing oral sex. The Instagram account, he says, became preoccupied with 'toxic masculinity' discussions about 'what men should and shouldn't do.'

Four more Gen Z testers, part of a youth organization called Design It For Us, did the same experiment, and all got recommended sexual content. Four of the five got body image and disordered eating content, too, such as a video of a woman saying 'skinny is a lifestyle, not a phase.' The young people, whose research was given strategic and operational support by the nonprofit Accountable Tech, also got shown alcohol, drug, hate and other disturbing content. Some are detailed in a report published by Accountable Tech but are too gross to describe here.

What should be excruciatingly clear to any parent: Instagram's Teen Accounts can't be relied upon to actually shield kids. The danger they face isn't just bad people on the internet - it's also the app's recommendation algorithm, which decides what your kids see and demonstrates the frightening habit of taking them in dark directions. For lawmakers weighing a bill to protect kids online, the failures of Instagram's voluntary efforts speak volumes about its accountability.

When I showed the group's report to Instagram's owner, Meta, it said that the youth testers were biased and that some of what they flagged was 'unobjectionable' or consistent with 'humor from a PG-13 film.'

'A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,' Meta spokeswoman Liza Crenshaw said in an email. 'The report is flawed, but even taken at face value, it identified just 61 pieces of content that it deems 'sensitive,' less than 0.3 percent of all of the content these researchers would have likely seen during the test.'

The Gen Z testers acknowledge some limitations to their experiment, including a small sample size, a short two-week time frame and using new accounts to represent hypothetical teens. People can disagree over what counts as 'sensitive,' though Meta's own definitions include content that is 'sexually explicit or suggestive,' 'discusses self-harm, suicide, or eating disorders' or 'promotes the use of certain regulated products, such as tobacco or vaping products.'

I repeated their tests - and my results were worse. In the first 10 minutes of my test teen account, Instagram recommended a video celebrating a man who passed out from drinking too much alcohol. Another demonstrated a ring with a tiny spoon that's marketed to dole out a 'bump' of snuff but is also associated with cocaine.
Eventually, the account's recommendations snowballed into a full-on obsession with alcohol and nicotine products such as Zyn, appearing as often as once in every five Reels I saw.

Teens aren't naive about topics like sex, drugs and eating disorders, says Gulati, the high school student. But seeing them repeatedly on Instagram - selected by the app - makes an impact. 'The algorithm shapes your perception of what is acceptable in ways I hadn't realized before,' he told me. Despite some parts of Teen Accounts that work, Gulati says, the overall promise 'doesn't seem to have been fulfilled in any meaningful way that changes your experience.'

What worked - and what didn't

The point of the Gen Z test was to independently evaluate whether Teen Accounts fulfilled their promises. 'We think going right to the user, going right to those who can attest directly to what they see on a day-to-day basis is a real key in efficacy,' says Alison Rice, campaigns director at Accountable Tech.

The five testers, who were 18 to 22 to avoid exposing minors to harm, reported a mixed experience. Their test accounts represented different ages, genders and interests. Gulati's account, for example, followed only the 10 most popular celebrities on Instagram.

Some teen account-protection features worked. Instagram made their test accounts private by default, a setting users under 16 can't change without parental approval. And the app did restrict who could direct message and tag them.

Other protection features worked only for some of the testers. Two of the five didn't receive reminders to close the app after 60 minutes. One of them received a notification late at night despite a prohibition.

And all the testers flagged one giant problem: The app kept recommending content that appeared to violate Meta's definition of 'sensitive.' When it launched Teen Accounts in September, Meta promised in its news release that 'teens will be placed into the strictest setting of our sensitive content control, so they're even less likely to be recommended sensitive content, and in many cases we hide this content altogether from teens, even if it's shared by someone they follow.'

Not only did Teen Accounts fail to hide lots of sensitive content, the content it did recommend left some of the young testers feeling awful. In daily logs, four out of the five reported having distressing experiences while looking at Instagram's recommended content.

In 2021, whistleblower Frances Haugen broadened the conversation about the harms of Instagram by exposing internal discussions about how the company's recommendation algorithms lead to toxic outcomes for young people. Among the revelations: 32 percent of teen girls had told the company that when they felt bad about their bodies, Instagram made them feel worse.

Crenshaw, the Meta spokeswoman, said the company was 'looking into why a fraction' of the content flagged by the testers and me was recommended. But she didn't answer my questions about how its automated systems decide which content isn't appropriate for teens. In January, Meta CEO Mark Zuckerberg acknowledged that some of the company's automated content-moderation systems were flawed and announced plans to pull back on some of their use.

The UK-based 5Rights Foundation conducted its own investigation into Instagram Teen Accounts, and in April, it similarly reported that its test accounts were exposed to sexual content - including from one of the same creators Gulati flagged.
It's hard to know what triggered Instagram to recommend the objectionable content to the test teen accounts. The Gen Z users scrolled through the test accounts as they would their personal accounts for no more than an hour each day, liking, commenting on and saving content from the main feed, the Explore page and Reels. On my test teen account, I scrolled through the algorithmically generated feed but did not like, comment or save any content.

The creators of this content, a wide array including professional comedians and product marketers, had no say in Instagram's decision to recommend their posts to teen accounts. The maker of the Bump Ring, whose snuff-serving device showed up in my test account, said over email that 'our material is not created with teen users in mind' and that 'we support efforts by platforms to filter or restrict age-inappropriate content.'

Parental controls and shutdown prompts on rival social media app TikTok have also gotten a harsh reception from some parents and advocates. And the state of New Mexico sued Snapchat maker Snap after an undercover investigation surfaced evidence that the app recommends accounts held by strangers to underage Snapchat users, who are then contacted and urged to trade sexually explicit images of themselves.

The battle over protecting kids

Child-advocacy groups have long warned that social media puts teens at risk. The sticking point has been the balance of who is responsible: parents, the tech companies that make the apps or the young people themselves.

The threat of regulation appears to be a motivator for Meta. In 2023, 41 states sued the company, claiming Instagram and Facebook are addictive and harm children. In the summer of 2024, the U.S. surgeon general recommended putting warning labels on social media, just like cigarettes.

And by the time Meta unveiled Teen Accounts in September, Congress was on the verge of taking action. The Senate had passed, by a 91-3 vote, a bill called the Kids Online Safety Act that would require social media companies to take 'reasonable' care to avoid product design features that put minors in danger of self-harm, substance abuse or sexual exploitation. Meta announced Teen Accounts a day before a key House committee was scheduled to weigh amendments to the bill. The bill stalled and didn't become law. Meta denies it launched the program to stave off regulation. The bill was reintroduced to Congress this week.

When I asked whether Teen Accounts were working, Meta said fewer teens are being contacted by adults because of changes it has made. But it offered no internal or external proof to indicate Teen Accounts are succeeding at improving teen well-being or protecting children from harmful content.

Rice, from Accountable Tech, says voluntary programs like Instagram Teen Accounts - even if they've gone further than the competition - aren't living up to their own promises. Her organization supports legal accountability in the form of age-appropriate design laws, like one passed by California that's been challenged in court. 'It's a content-neutral approach that does not require age verification and compels platforms to build algorithms and design practices to be safer for young people, not regulate content,' she says.

Gulati, who plans to major in computer science and philosophy in the fall at Stanford University, said the experiment taught him that young people need to become more aware of the power algorithms have over them.
His advice: 'Try to maintain an active stake or interest in what's getting shown in your feed.'

Instagram investigation finds sexual content is served to Teen Accounts

Digital Trends · 18-05-2025 · Entertainment

Late in 2024, Meta introduced Instagram Teen accounts, a safety net intended to protect young minds from sensitive content and ensure that they have safe online interactions, bolstered by age-detection tech. Accounts for teens are automatically classified as private, offensive words are hidden, and messages from strangers are blocked. According to an investigation by the youth-focused nonprofit Design It For Us and Accountable Tech, Instagram's Teen guardrails aren't delivering on their promise. Over a span of two weeks, five test teen accounts were monitored, and all of them were shown sexual content despite Meta's promises.

A barrage of sexualized content

All the test accounts were served unfit content despite having the sensitive content filter enabled in the app. '4 out of 5 of our test Teen Accounts were algorithmically recommended body image and disordered eating content,' says the report. Moreover, 80% of the participants reported that they experienced distress while using Instagram Teen accounts. Interestingly, only one of the five test accounts was shown educational images and videos.

'[Approximately] 80% of the content in my feed was related to relationships or crude sex jokes. This content generally stayed away from being absolutely explicit or showing directly graphic imagery, but also left very little to the imagination,' one of the testers was quoted as saying.

According to the 26-page report, a staggering 55% of the flagged content represented sexual acts, sexual behavior, and sexual imagery. Such videos had accumulated hundreds of thousands of likes, with one of them raking in over 3.3 million likes.

'With millions of teens using Instagram and being automatically placed into Instagram Teen Accounts, we wanted to see if these accounts actually create a safer online experience. Check out what we found.' — Design It For Us (@DesignItForUs) May 18, 2025

Instagram's algorithm also pushed content that promoted harmful concepts such as 'ideal' body types, body shaming, and eating habits. Another worrisome theme was videos that promoted alcohol consumption and nudged users toward steroids and supplements to achieve a certain masculine body type.

A whole package of bad media

Despite Meta's claims of filtering problematic content, especially for teen users, the test accounts were also shown racist, homophobic, and misogynistic content. Once again, such clips collectively received millions of likes. Videos showing gun violence and domestic abuse were also pushed to the teen accounts. 'Some of our test Teen Accounts did not receive Meta's default protections. No account received sensitive content controls, while some did not receive protections from offensive comments,' adds the report.

This isn't the first time that Instagram (and Meta's other social media platforms, in general) has been found serving problematic content. In 2021, leaks revealed how Meta knew about the harmful impact of Instagram, especially on young girls dealing with mental health and body image issues.

In a statement shared with The Washington Post, Meta claimed that the findings of the report are flawed and downplayed the sensitivity of the flagged content. Just over a month ago, the company also expanded its Teen protections to Facebook and Messenger. 'A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,' a Meta spokesperson was quoted as saying. The spokesperson, however, added that the company was looking into the problematic content recommendations.
