
Latest news with #InstagramTeenAccounts

Gen Z users and a dad tested Instagram Teen Accounts. Their feeds were shocking.

Yahoo · 19-05-2025

Instagram promises parents that its Teen Accounts shield kids from harm 'by default.' Tests by a Gen Z nonprofit and me - a dad - found it fails spectacularly on some key dimensions.

This spring, Sacramento high school senior Saheb Gulati used a burner phone to create a test Instagram account for a hypothetical 16-year-old boy. As of this past fall, all accounts used by teens are supposed to automatically filter out 'sensitive' content, among other protections, for mental health and safety.

Over two weeks, Gulati says, his test account received recommended sexual content that 'left very little to the imagination.' He counted at least 28 Instagram Reels describing sexual acts, including digital penetration, using a sex toy and memes describing oral sex. The account, he says, became preoccupied with 'toxic masculinity' discussions about 'what men should and shouldn't do.'

Four more Gen Z testers, part of a youth organization called Design It For Us, did the same experiment, and all got recommended sexual content. Four of the five got body image and disordered eating content, too, such as a video of a woman saying 'skinny is a lifestyle, not a phase.' The young people, whose research was given strategic and operational support by the nonprofit Accountable Tech, also got shown alcohol, drug, hate and other disturbing content. Some examples are detailed in a report published by Accountable Tech but are too gross to describe here.

What should be excruciatingly clear to any parent: Instagram's Teen Accounts can't be relied upon to actually shield kids. The danger kids face isn't just bad people on the internet - it's also the app's recommendation algorithm, which decides what they see and has a frightening habit of taking them in dark directions. For lawmakers weighing a bill to protect kids online, the failures of Instagram's voluntary efforts speak volumes about its accountability.

When I showed the group's report to Instagram's owner, Meta, it said that the youth testers were biased and that some of what they flagged was 'unobjectionable' or consistent with 'humor from a PG-13 film.'

'A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,' Meta spokeswoman Liza Crenshaw said in an email. 'The report is flawed, but even taken at face value, it identified just 61 pieces of content that it deems 'sensitive,' less than 0.3 percent of all of the content these researchers would have likely seen during the test.'

The Gen Z testers acknowledge some limitations to their experiment, including a small sample size, a short two-week time frame and using new accounts to represent hypothetical teens. People can disagree over what counts as 'sensitive,' though Meta's own definitions include content that is 'sexually explicit or suggestive,' 'discusses self-harm, suicide, or eating disorders' or 'promotes the use of certain regulated products, such as tobacco or vaping products.'

I repeated their tests - and my results were worse. In the first 10 minutes of my test teen account, Instagram recommended a video celebrating a man who passed out from drinking too much alcohol. Another demonstrated a ring with a tiny spoon that's marketed to dole out a 'bump' of snuff but is also associated with cocaine. Eventually, the account's recommendations snowballed into a full-on obsession with alcohol and nicotine products such as Zyn, appearing as often as once in every five Reels I saw.

Teens aren't naive about topics like sex, drugs and eating disorders, says Gulati, the high school student. But seeing them repeatedly on Instagram - selected by the app - makes an impact. 'The algorithm shapes your perception of what is acceptable in ways I hadn't realized before,' he told me. Despite some parts of Teen Accounts that work, Gulati says, the overall promise 'doesn't seem to have been fulfilled in any meaningful way that changes your experience.'

What worked - and what didn't

The point of the Gen Z test was to independently evaluate whether Teen Accounts fulfilled their promises. 'We think going right to the user, going right to those who can attest directly to what they see on a day-to-day basis is a real key in efficacy,' says Alison Rice, campaigns director at Accountable Tech.

The five testers, who were 18 to 22 to avoid exposing minors to harm, reported a mixed experience. Their test accounts represented different ages, genders and interests. Gulati's account, for example, followed only the 10 most popular celebrities on Instagram.

Some teen account-protection features worked. Instagram made their test accounts private by default, a setting users under 16 can't change without parental approval. And the app did restrict who could direct message and tag them.

Other protection features worked only for some of the testers. Two of the five didn't receive reminders to close the app after 60 minutes. One of them received a notification late at night despite a prohibition.

And all the testers flagged one giant problem: The app kept recommending content that appeared to violate Meta's definition of 'sensitive.'

When it launched Teen Accounts in September, Meta promised in its news release that 'teens will be placed into the strictest setting of our sensitive content control, so they're even less likely to be recommended sensitive content, and in many cases we hide this content altogether from teens, even if it's shared by someone they follow.'

Not only did Teen Accounts fail to hide lots of sensitive content, but the content the app did recommend left some of the young testers feeling awful. In daily logs, four out of the five reported having distressing experiences while looking at Instagram's recommended content.

In 2021, whistleblower Frances Haugen broadened the conversation about the harms of Instagram by exposing internal discussions about how the company's recommendation algorithms lead to toxic outcomes for young people. Among the revelations: 32 percent of teen girls had told the company that when they felt bad about their bodies, Instagram made them feel worse.

Crenshaw, the Meta spokeswoman, said the company was 'looking into why a fraction' of the content flagged by the testers and me was recommended. But she didn't answer my questions about how its automated systems decide which content isn't appropriate for teens. In January, Meta CEO Mark Zuckerberg acknowledged that some of the company's automated content-moderation systems were flawed and announced plans to pull back on some of their use.

The UK-based 5Rights Foundation conducted its own investigation into Instagram Teen Accounts, and in April it similarly reported that its test accounts were exposed to sexual content - including from one of the same creators Gulati flagged.

It's hard to know what triggered Instagram to recommend the objectionable content to the test teen accounts. The Gen Z users scrolled through the test accounts as they would their personal accounts for no more than an hour each day, liking, commenting on and saving content from the main feed, the Explore page and Reels. On my test teen account, I scrolled through the algorithmically generated feed but did not like, comment on or save any content.

The creators of this content, a wide array including professional comedians and product marketers, had no say in Instagram's decision to recommend their posts to teen accounts. The maker of the Bump Ring, whose snuff-serving device showed up in my test account, said over email that 'our material is not created with teen users in mind' and that 'we support efforts by platforms to filter or restrict age-inappropriate content.'

Parental controls and shutdown prompts on rival social media app TikTok have also gotten a harsh reception from some parents and advocates. And the state of New Mexico sued Snapchat maker Snap after an undercover investigation surfaced evidence that the app recommends accounts held by strangers to underage Snapchat users, who are then contacted and urged to trade sexually explicit images of themselves.

The battle over protecting kids

Child-advocacy groups have long warned that social media puts teens at risk. The sticking point has been the balance of who is responsible: parents, the tech companies that make the apps or the young people themselves.

The threat of regulation appears to be a motivator for Meta. In 2023, 41 states sued the company, claiming Instagram and Facebook are addictive and harm children. In the summer of 2024, the U.S. surgeon general recommended putting warning labels on social media, just like cigarettes.

And by the time Meta unveiled Teen Accounts in September, Congress was on the verge of taking action. The Senate had passed, by a 91-3 vote, a bill called the Kids Online Safety Act that would require social media companies to take 'reasonable' care to avoid product design features that put minors in danger of self-harm, substance abuse or sexual exploitation. Meta announced Teen Accounts a day before a key House committee was scheduled to weigh amendments to the bill. The bill stalled and didn't become law. Meta denies it launched the program to stave off regulation. The bill was reintroduced to Congress this week.

When I asked whether Teen Accounts were working, Meta said fewer teens are being contacted by adults because of changes it has made. But it offered no internal or external proof that Teen Accounts are succeeding at improving teen well-being or protecting children from harmful content.

Rice, from Accountable Tech, says voluntary programs like Instagram Teen Accounts - even if they've gone further than the competition - aren't living up to their own promises. Her organization supports legal accountability in the form of age-appropriate design laws, like one passed by California that's been challenged in court. 'It's a content-neutral approach that does not require age verification and compels platforms to build algorithms and design practices to be safer for young people, not regulate content,' she says.

Gulati, who plans to major in computer science and philosophy in the fall at Stanford University, said the experiment taught him that young people need to become more aware of the power algorithms have over them. His advice: 'Try to maintain an active stake or interest in what's getting shown in your feed.'

Instagram bans livestreaming for kids without parental consent

The Hill · 09-04-2025

Meta announced on Tuesday new 'built-in protections' for its Instagram Teen Accounts, including a new requirement for parents to consent before children under 16 can go 'live' on the platform or unblur nudity in images they receive in direct messages.

The updates expand on restrictions rolled out last year as part of parent company Meta's 'Instagram Teen Accounts' program, which came in response to heightened concern about the harmful effects of social media on children and teen mental health.

With the new features announced Tuesday, teens under 16 will be prevented from using the Instagram Live feature without parental consent. They will also need their parents' permission to turn off a feature that blurs images containing suspected nudity in direct messages. Meta said the updates will be available in the next couple of months.

Meta also announced on Tuesday it will expand its teen account program to Facebook and Messenger. Teen accounts will first be available in the U.S., UK, Australia and Canada before expanding to 'other regions soon.' Meta said the Facebook and Messenger Teen Accounts will include features similar to those in the Instagram Teen Accounts, which launched in September.

The Instagram Teen Account program includes expanded protections for users under 18, including making the accounts private by default, allowing direct messages only from people they follow or are connected to, and limiting the sensitive content young users see. Users also get notified when they've been on the app for more than 60 minutes, and 'sleep mode' is enabled at night to silence notifications and auto-reply to direct messages.

These features are automatically turned on for all teen accounts, but 16- and 17-year-olds can disable them themselves, and children under 16 can do so with parental consent. Meta touted the success of the program, saying there are at least 54 million active Teen Accounts globally since the program launched in September.

Fox News AI Newsletter: The dangers of oversharing with AI tools

Fox News · 09-04-2025

IN TODAY'S NEWSLETTER:
- Dangers of oversharing with AI tools
- Instagram Teen Accounts unveils new built-in protections to block nudity, livestreams
- 'Sound of Freedom' producer says AI tools helped nab child trafficker that eluded FBI for 10 years

DON'T OVERSHARE DEETS: Have you ever stopped to think about how much your chatbot knows about you? Over the years, tools like ChatGPT have become incredibly adept at learning your preferences, habits and even some of your deepest secrets. But while this can make them seem more helpful and personalized, it also raises some serious privacy concerns. As much as you learn from these AI tools, they learn just as much about you.

GREATER CONTROL: Instagram on Tuesday announced new built-in protections for Instagram Teen Accounts and expanded its suite of features to the Facebook and Messenger applications.

MAJOR VICTORY: Child predators are on high alert as organizations around the globe have begun rolling out artificial intelligence tools to bring sex traffickers to justice and rescue young victims, according to 'Sound of Freedom' executive producer Paul Hutchinson.

INDUSTRIAL SUPER-HUMANOID ROBOT: California-based robotics and artificial intelligence company Dexterity has unveiled Mech, which it calls the world's first industrial super-humanoid robot.
