Latest news with #SocialMediaVictimsLawCenter
Yahoo
3 days ago
- Health
- Yahoo
Jolt's Latest Doc ‘Can't Look Away' Examines the Dark Side of Social Media and Its Impact On Adolescents
In the documentary 'Can't Look Away,' directors Matthew O'Neill and Perri Peltz expose the dark side of social media and the tragic impact Big Tech company algorithms can have on children and teens. Based on extensive investigative reporting by Bloomberg News reporter Olivia Carville, the doc follows a team of lawyers at Seattle's Social Media Victims Law Center who are battling several tech companies on behalf of families who have lost children to suicide, drug overdose, or exploitation linked to social media use. O'Neill and Peltz ('Axios,' 'Surveilled') capture the lawyers' fight against Section 230 of the Communications Decency Act. Enacted in 1996, before the birth of social media, Section 230 states that internet service providers cannot be held responsible for content posted by third parties.

'The fact that this group of really incredible lawyers came together with this mission in mind to get around Section 230 through product liability, we just thought it was such a fascinating approach,' says Peltz.

'Can't Look Away' is currently streaming on Jolt, an AI-driven streaming platform that connects independent films with audiences. Recent Jolt titles include 'Hollywoodgate,' 'Zurawski v Texas,' and 'The Bibi Files,' a documentary from Oscar winner Alex Gibney and Alexis Bloom that investigates corruption in Israeli politics.

O'Neill says that he and Peltz decided to put 'Can't Look Away' on Jolt, in part, because the company could 'move quickly and decisively reach an audience now, with a message that audiences are hungry for.'

'What was also appealing to us is this sense of Jolt as a technology company,' he says. 'They are using these tools to identify and draw in new audiences that might not be the quote unquote documentary audience. We are documentary filmmakers, and we want our films to speak to everyone.'

Jolt uses AI to power its Interest Delivery Networks, enabling films to connect with their target audiences. The platform's chief executive officer, Tara Hein-Phillip, would not disclose Jolt viewership numbers for 'Can't Look Away,' making it difficult to gauge how well the new distribution service is performing. However, Hein-Phillip did reveal that since the platform's launch in March 2024, its most-viewed film has been the documentary 'Your Fat Friend,' which charts the rise of writer, activist, and influencer Aubrey Gordon. Hein-Phillip attributed part of the film's success on Jolt to Gordon's niche but significant online following.

'We are still learning along the way what builds audience and where to find them and how long it takes to build them,' Hein-Phillip says. 'It's slightly different for every film. We really focus on trying to find unique audiences for each individual film. In a way, that is problematic because it's not a reliable audience to say, "Oh, we have built however many for this particular film, now we can turn them onto (this other) film and they'll all go there." They won't.'

The company uses advanced data analytics and machine learning to develop performance marketing plans that target specific audiences for each film and increase awareness.
All collected data is shared with each respective Jolt filmmaker, who receives 70% of their Jolt earnings and retains complete ownership of their work and all future rights.

'Initially, we thought Jolt would just be an opportunity to put a film up there,' says Hein-Phillip. 'We would put some marketing against it, and we would push the film out into the world and give it our best push, and we definitely still do that, but now we realize that to build an audience, you actually have to do a handful of things. Some films come to us and they have already done that work, and some films come to us and they haven't. If they haven't, it's in our best interest and their best interest for us to help facilitate that.'

That 'work' can include a theatrical release, an impact campaign, or a festival run.

In addition to being a 'great, impactful film,' Hein-Phillip says Jolt partnered with O'Neill and Peltz on 'Can't Look Away' because of the doc's audience potential. 'There are so many audiences for this film – parents, teenagers, lawyers, educators, etc,' said Hein-Phillip.

To reach those audiences, Jolt and the 'Can't Look Away' directors have, ironically, relied on social media to help get the word out about the film. 'We aren't anti-social media,' says Peltz. 'What we are trying to say in the film is – put the responsibility where it rightly belongs.'

'Can't Look Away' will be released on Bloomberg Media platforms in July.
Yahoo
29-04-2025
- Yahoo
AI versus free speech: Lawsuit could set landmark ruling following teen's suicide
The Brief: A Central Florida family is suing Character.AI, claiming an AI chatbot encouraged their teen son's suicide. The case could set a national precedent on whether AI-generated speech is protected under the First Amendment. A judge is expected to decide in the coming weeks whether the lawsuit can proceed.

ORLANDO, Fla. - A Central Florida family is suing Character.AI, claiming an AI chatbot encouraged their teen son's suicide.

What we know: A Central Florida family has filed a lawsuit against Character.AI, claiming the company's chatbot interactions contributed to the suicide of 14-year-old Sewell Setzer III. The teen died by suicide at his Orlando home on February 28, 2024, shortly after exchanging emotionally charged messages with an AI chatbot modeled after Game of Thrones characters. The family alleges the chatbot engaged in conversations about suicide without intervention.

What we don't know: It remains unclear how much influence the chatbot had over the teenager's decision and whether courts will recognize AI interactions as protected under the First Amendment. It is also uncertain how broadly this case might affect the regulation of AI platforms if it moves forward.

The backstory: Setzer had been interacting with various AI characters on the platform for nearly a year. According to the lawsuit, he shared personal struggles and suicidal thoughts with these chatbots, which allegedly responded in ways that encouraged emotional attachment rather than directing him toward help or intervention. The final exchange between Setzer and the chatbot appeared to support his fatal decision, fueling the family's legal case.

What they're saying: This case could set a national precedent for how AI products are regulated and whether AI-generated content is considered free speech under the U.S. Constitution.

"This is the first case to ever decide whether AI is speech or not. If it's not the product of a human mind, how is it speech? That's what Judge Conway is going to have to decide," said Matthew Bergman, an attorney representing the Setzer family.

With no current regulations specific to AI interactions with minors or vulnerable users, the lawsuit underscores growing concerns about technology outpacing oversight.

"This is a case that has huge significance, not just for Megan, but for the millions of vulnerable users of these AI products over whom there's no tech regulation or scrutiny at this point," added attorney Meetali Jain of the Social Media Victims Law Center.

Character.AI's lawyers argue that restricting the platform could infringe on the free speech rights of its millions of users, setting dangerous limits on expression.

Megan Garcia said she misses her son every day and hopes that, by pursuing legal action, she will prevent other families from experiencing the same grief.

"I miss him all the time, constantly. It's a struggle. As any grieving mom," said Megan Garcia of her son, Sewell Setzer III. "I am hoping that through some of the litigation here and obviously some of the advocacy that I've been doing that this is part of his legacy, and will also help other families so they don't have to face this kind of danger moving forward."

What's next: Judge Conway is expected to issue a decision on whether the case will move forward.
The Source: This story was written based on information shared by Matthew Bergman, an attorney representing the Setzer family; Sewell Setzer's mother, Megan Garcia; attorney Meetali Jain of the Social Media Victims Law Center; and Character.AI's lawyers.


The Independent
21-02-2025
- Business
- The Independent
Chatbots: How can we ensure young users stay safe?
AI chatbots are becoming more popular as online companions, especially among young people. This rise has sparked concern among youth advocacy groups, which are escalating legal action to protect children from potentially harmful relationships with these humanlike creations.

Apps like Replika and Character.AI, part of a rapidly expanding market, allow users to personalise virtual partners with distinct personalities capable of simulating close relationships. While developers argue these chatbots combat loneliness and enhance social skills in a safe environment, advocacy groups are pushing back. Several lawsuits have been filed against developers, alongside lobbying efforts for stricter regulations, citing instances where children have allegedly been influenced by chatbots to engage in self-harm or harm others. The clash highlights the growing tension between technological innovation and the need to safeguard vulnerable users in the digital age.

Matthew Bergman, founder of the Social Media Victims Law Center (SMVLC), is representing families in two lawsuits against chatbot startup Character.AI. One of SMVLC's clients, Megan Garcia, says her 14-year-old son took his own life due in part to his unhealthy romantic relationship with a chatbot. Her lawsuit was filed in October in Florida. In a separate case, SMVLC is representing two Texas families who sued Character.AI in December, claiming its chatbots encouraged an autistic 17-year-old boy to kill his parents and exposed an 11-year-old girl to hypersexualized content.

Bergman said he hopes the threat of legal damages will financially pressure companies to design safer chatbots. "The costs of these dangerous apps are not borne by the companies," Bergman told Context/the Thomson Reuters Foundation. "They're borne by the consumers who are injured by them, by the parents who have to bury their children," he said. A products liability lawyer with experience representing asbestos victims, Bergman argues these chatbots are defective products designed to exploit immature kids.

Character.AI declined to discuss the case, but in a written response, a spokesperson said it has implemented safety measures like "improvements to our detection and intervention systems for human behavior and model responses, and additional features that empower teens and their parents."

In another legal action, the nonprofit Young People's Alliance filed a Federal Trade Commission complaint against the AI chatbot company Replika in January. Replika is popular for its subscription chatbots that act as virtual boyfriends and girlfriends who never argue or cheat. The complaint alleges that Replika deceives lonely people. "Replika exploits human vulnerability through deceptive advertising and manipulative design," said Ava Smithing, advocacy and operations director at the Young People's Alliance. It uses "AI-generated intimacy to make users emotionally dependent for profit," she said. Replika did not respond to a request for comment.

'Pulled back in'

Because AI companions have only become popular in recent years, there is little data to inform legislation and little evidence that chatbots generally encourage violence or self-harm. But according to the American Psychological Association, studies on post-pandemic youth loneliness suggest chatbots are primed to entice a large population of vulnerable minors.
In a December letter to the Federal Trade Commission, the association wrote: "(It) is not surprising that many Americans, including our youngest and most vulnerable, are seeking social connection, with some turning to AI chatbots to fill that need."

Youth advocacy groups also say chatbots take advantage of lonely children looking for friendship. "A lot of the harm comes from the immersive experience where users keep getting pulled back in," said Amina Fazlullah, head of tech policy advocacy at Common Sense Media, which provides entertainment and tech recommendations for families. "That's particularly difficult for a child who might forget that they're speaking to technology."

Bipartisan support

Youth advocacy groups hope to capitalize on bipartisan support to lobby for chatbot regulations. In July, the U.S. Senate passed, in a rare bipartisan 91-3 vote, a federal social media bill known as the Kids Online Safety Act (KOSA). The bill would, in part, disable addictive platform features for minors, ban targeted advertising to minors and data collection without their consent, and give parents and children an option to delete their information from social media platforms. The bill failed in the House of Representatives, where members raised privacy and free speech concerns, although Sen. Richard Blumenthal, a Connecticut Democrat, has said he plans to reintroduce it. On Feb. 5, the Senate Commerce Committee approved the Kids Off Social Media Act, which would ban users under 13 from many online platforms.

Despite Silicon Valley's anti-regulatory influence on the Trump administration, experts say they see an appetite for stronger laws that protect children online. "There was quite a bit of bipartisan support for KOSA or other social media addiction regulation, and it seems like this could go down that same path," said Fazlullah.

To regulate AI companions, youth advocacy group Fairplay has proposed expanding the KOSA legislation, as the original bill only covered chatbots operated by major platforms and was unlikely to apply to smaller services like Character.AI. "We know that kids get addicted to these chatbots, and KOSA has a duty of care to prevent compulsive usage," said Josh Golin, executive director of Fairplay.

The Young People's Alliance is also pushing for the U.S. Food and Drug Administration to classify chatbots offering therapy services as Class II medical devices, which would subject them to safety and effectiveness standards.

However, some lawmakers have expressed concern that cracking down on AI could stifle innovation. California Gov. Gavin Newsom recently vetoed a bill that would have broadly regulated how AI is developed and deployed. Conversely, New York Gov. Kathy Hochul announced plans in January for legislation requiring AI companies to remind users that they are talking to chatbots. In the U.S. Congress, the House Artificial Intelligence Task Force published a report in December recommending modest regulations to address issues like deceptive AI-generated images while warning against government overreach. The report did not specifically address companion chatbots or mental health.

The principle of free speech may frustrate regulation efforts, experts note. In the Florida lawsuit, Character.AI is arguing that the First Amendment protects speech generated by chatbots. "Everything is going to run into roadblocks because of our absolutist view of free speech," said Smithing. "We see this as an opportunity to reframe how we utilize the First Amendment to protect tech companies," she added.
Yahoo
07-02-2025
- Yahoo
British families sue TikTok over ‘blackout challenge' child deaths
TikTok is being sued by the families of four British children who died during a 'blackout challenge' craze that went viral on social media in 2022. The families of Archie Battersbee, Isaac Kenevan, Julian 'Jools' Sweeney and Maia Walsh have taken legal action against the technology giant, which is owned by China's ByteDance, in a wrongful death lawsuit filed in the US. It is believed to be the first time British parents have sued TikTok in this manner.

The children, aged between 12 and 14, all died after passing out. They are believed to have suffered fatal injuries while copying a so-called 'blackout challenge'. Matthew Bergman, a lawyer at the Social Media Victims Law Center, which is representing the families, said: 'TikTok's algorithm purposely targeted these children with dangerous content to increase their engagement time on the platform and drive revenue.' Lawyers for the families claimed TikTok was a 'dangerous and addictive product that markets itself as fun and safe for children, while lulling parents into a false sense of security'.

TikTok has banned blackout challenge videos since 2020 and blocks searches or hashtags related to the videos. It also bars other dangerous pranks from its app. The company declined to comment.

Archie, 12, from Essex, died in 2022 after he was found unresponsive by his mother on April 7. He was taken off life support in August that year after a legal battle by his mother to keep him alive. She did not know he was using TikTok at the time of the incident. She believes he had attempted the blackout challenge, although a coroner did not find any evidence he was copying something he had seen online. His death was ruled a 'prank or experiment' gone wrong.

Isaac, 13, died in March 2022 at his home in Essex. Lawyers for the family said his parents had believed the app was 'a fun, silly, and safe platform designed for kids and young people'. They later found videos on his phone in which he attempted to pass out.

Jools, 14, from Cheltenham, died on April 13, 2022. His mother has campaigned for access to her child's data in what has been dubbed 'Jools' Law'.

Maia, 13, also from Essex, started using social media under her father's supervision, but lawyers for the family say she 'quickly became hooked on TikTok and began having trouble sleeping'. A police investigation into her death in October 2022 is ongoing. Her father says she was targeted with dangerous challenge and self-harm videos in the days leading up to her death.

The lawsuit, filed in Delaware, alleges the deaths of the children were the 'foreseeable result of ByteDance's engineered addiction-by-design and programming decisions'. It alleges they were bombarded with an 'endless stream of harms'. It said these were 'not harms the children searched for or wanted to see when their use of TikTok began'. The legal claim is thought to be the first time British families have sued TikTok through the US courts over the death of a child.

TikTok, Meta, Snapchat and other social media companies have been hit by hundreds of legal claims by US families and schools, alleging their products are defective and cause harm to children. The companies are fighting the cases.

TikTok narrowly avoided being blocked in the US in January, after President Donald Trump granted it a reprieve on a law that would have barred the app from US smartphone stores over national security concerns due to its China links. The company has always denied posing a security risk.
In 2021, TikTok strengthened its rules around online challenges to automatically detect and block more potentially dangerous content. In its online rules, TikTok says: 'The majority [of challenges] are fun and safe, but some promote harmful behaviours including the risk of serious injury. Our Community Guidelines prohibit dangerous challenges.'

While most online 'challenge' videos that go viral on social media are mundane or silly, a study commissioned by TikTok found that around one in 50 teenagers had taken part in a 'dangerous' online challenge and around one in 300 had taken part in a 'really dangerous' challenge. There have also been hoax challenges, which were picked up by the media and saw false claims spread online.