
Latest news with #Section230

Jolt's Latest Doc ‘Can't Look Away' Examines the Dark Side of Social Media and Its Impact On Adolescents

Yahoo

6 days ago

  • Health


In the documentary 'Can't Look Away,' directors Matthew O'Neill and Perri Peltz expose the dark side of social media and the tragic impact Big Tech algorithms can have on children and teens. Based on extensive investigative reporting by Bloomberg News reporter Olivia Carville, the doc follows a team of lawyers at Seattle's Social Media Victims Law Center who are battling several tech companies on behalf of families who have lost children to suicide, drug overdose, or exploitation linked to social media use. O'Neill and Peltz ('Axios,' 'Surveilled') capture the lawyers' fight against Section 230 of the Communications Decency Act. Enacted in 1996, before the birth of social media, Section 230 states that internet service providers cannot be held responsible for what third parties post on their platforms.

'The fact that this group of really incredible lawyers came together with this mission in mind to get around Section 230 through product liability, we just thought it was such a fascinating approach,' says Peltz.

'Can't Look Away' is currently streaming on Jolt, an AI-driven streaming platform that connects independent films with audiences. Recent Jolt titles include 'Hollywoodgate,' 'Zurawski v Texas,' and 'The Bibi Files,' a documentary from Oscar winners Alex Gibney and Alexis Bloom that investigates corruption in Israeli politics. O'Neill says he and Peltz decided to put 'Can't Look Away' on Jolt, in part, because the company could 'move quickly and decisively reach an audience now, with a message that audiences are hungry for.'

'What was also appealing to us is this sense of Jolt as a technology company,' he says. 'They are using these tools to identify and draw in new audiences that might not be the quote unquote documentary audience. We are documentary filmmakers, and we want our films to speak to everyone.'

Jolt uses AI to power its Interest Delivery Networks, enabling films to connect with their target audiences. The platform's chief executive officer, Tara Hein-Phillip, would not disclose Jolt viewership numbers for 'Can't Look Away,' making it difficult to determine how well the new distribution service is performing. However, Hein-Phillip did reveal that since the platform's launch in March 2024, the company's most-viewed film is the documentary 'Your Fat Friend,' which charts the rise of writer, activist, and influencer Aubrey Gordon. Hein-Phillip attributed part of the film's success on Jolt to Gordon's niche but significant online following.

'We are still learning along the way what builds audience and where to find them and how long it takes to build them,' Hein-Phillip says. 'It's slightly different for every film. We really focus on trying to find unique audiences for each individual film. In a way, that is problematic because it's not a reliable audience to say, 'Oh, we have built however many for this particular film, now we can turn them onto (this other) film and they'll all go there.' They won't.'

The company uses advanced data analytics and machine learning to develop performance marketing plans that target specific audiences for each film and increase awareness. All collected data is shared with each respective Jolt filmmaker, who receives 70% of their Jolt earnings and retains complete ownership of their work and all future rights.

'Initially, we thought Jolt would just be an opportunity to put a film up there,' says Hein-Phillip. 'We would put some marketing against it, and we would push the film out into the world and give it our best push, and we definitely still do that, but now we realize that to build an audience, you actually have to do a handful of things. Some films come to us and they have already done that work, and some films come to us and they haven't. If they haven't, it's in our best interest and their best interest for us to help facilitate that.'

That 'work' can include a theatrical release, an impact campaign, or a festival run. In addition to being a 'great, impactful film,' Hein-Phillip says Jolt partnered with O'Neill and Peltz on 'Can't Look Away' because of the doc's audience potential. 'There are so many audiences for this film – parents, teenagers, lawyers, educators, etc.,' said Hein-Phillip.

To attract those audiences, Jolt and the 'Can't Look Away' directors have, ironically, relied on social media to help get the word out about the film. 'We aren't anti-social media,' says Peltz. 'What we are trying to say in the film is – put the responsibility where it rightly belongs.'

'Can't Look Away' will be released on Bloomberg Media platforms in July.

Section 230 Was Hijacked by Big Tech to Silence You

Yahoo

28-05-2025

  • Politics


In 1996, Congress passed a well-meaning law, Section 230 of the Communications Decency Act, to help internet platforms grow. It was supposed to protect online forums from liability for what their users said—not give billion-dollar corporations the right to shadow-ban dissidents, rig elections, and coordinate censorship with the federal government. But thanks to a judicial sleight of hand, Section 230 became the sledgehammer Big Tech used to bludgeon the First Amendment into submission. And now—at long last—the Supreme Court may have a chance to fix it. The case to watch is Fyk v. Facebook, and it might be the most important free speech lawsuit you've never heard of.

So, here's The Lie That Broke the Internet. Section 230(c)(1) reads: 'No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.' Sounds simple, right? Don't sue the platform for what someone else posts. But that's not how the courts interpreted it. They swapped out 'the publisher' for 'a publisher'—a tiny grammatical switch with massive consequences. That misquote gave platforms immunity not just for hosting content, but for what they choose to manipulate, suppress, or delete. This misinterpretation has allowed Big Tech giants to:

  • Throttle political speech they don't like;
  • Deplatform rival voices and competitors;
  • Shadow-ban stories that challenge official narratives;
  • And partner with the government to suppress dissenting opinions—all while claiming immunity.

Don't take my word for it—look at the receipts. The 'Twitter Files' revealed that federal agencies actively worked with platforms to suppress content. A federal judge even issued an injunction in Missouri v. Biden to stop this unconstitutional collusion. That's not moderation. That's state-sanctioned censorship in a corporate mask.

Congress intended Section 230 to protect platforms acting in good faith—hence the title of Section 230(c): 'Protection for 'Good Samaritan' blocking and screening of offensive material.' Platforms were supposed to remove truly harmful content—pornography, violence, abuse—not opinions that made their investors uncomfortable or their partners in D.C. nervous. But under the courts' bastardized reading of the law, the 'good faith' clause in Section 230(c)(2) became meaningless. If 230(c)(1) shields all moderation, then what's the point of requiring platforms to act in good faith at all? That's a textbook violation of the surplusage canon—a legal rule that says no part of a statute should be rendered pointless. In short, the courts rewrote the law. And they handed Big Tech the keys to our digital public square.

Jason Fyk built a multi-million-dollar business on Facebook. With over 25 million followers, his pages drove massive traffic—until Facebook targeted and deleted his content, allegedly redirecting it to competitors and killing his revenue. When he sued, Judge Jeffrey White dismissed the case under Section 230, claiming Facebook was immune. But here's the kicker: Fyk wasn't suing over what other people said. He was suing over what Facebook did. They didn't just host his content—they manipulated it, redirected it, and destroyed his business. That's not speech. That's sabotage.

Fyk's verified complaint included sworn factual allegations. Under standard civil procedure (Rule 12(b)(6)), the court was required to treat those facts as true. Instead, the judge parroted Facebook's false claims—even branding Fyk the 'pee page guy' over a page he didn't even own. This kind of judicial deference to Big Tech is exactly why Fyk's case is headed to the Supreme Court.

Let's clear something up: Section 230 is an affirmative defense, not 'sovereign immunity.' That means platforms must prove their actions were lawful—not automatically escape trial. In Barnes v. Yahoo! (2009), the Ninth Circuit confirmed that Section 230 is not a blanket shield. But courts have ignored that precedent and instead created a fantasy world where Big Tech can't be touched—no matter what they do. As Jason Fyk explains in his eye-opening analysis, Section 230 for Dummies, the judiciary has created 'super-immunity' out of thin air. That's not just unconstitutional—it's dangerous.

The Supreme Court has a golden opportunity here. If they take Fyk's case, they can:

  • Restore due process by ending early dismissals based on false immunity;
  • Reinstate the 'good faith' requirement for content moderation;
  • Clarify the difference between a neutral host and an active publisher;
  • And return free speech to the people, not the platforms.

No new laws are needed—just a correct interpretation of the law we already have. Section 230 was designed to protect speech, not suppress it. It was written to encourage good-faith moderation, not corporate censorship on behalf of the federal government. The law isn't broken. The courts broke it. Now it's time they fix it.

Meta faces increasing scrutiny over widespread scam ads

Fox News

22-05-2025

  • Business


Meta, the parent company of Facebook and Instagram, is under fire after a major report revealed that thousands of fraudulent ads have been allowed to run on its platforms. According to the Wall Street Journal, Meta accounted for nearly half of all scam complaints tied to Zelle transactions at JPMorgan Chase between mid-2023 and mid-2024. Other banks have also reported a high number of fraud cases linked to Meta's platforms.

The problem of scam ads on Facebook has grown rapidly in recent years. Experts point to the rise of cryptocurrency schemes, AI-generated content and organized criminal groups operating from Southeast Asia. These scams range from fake investment opportunities to misleading product offers and even the sale of nonexistent puppies. One example involves Edgar Guzman, a legitimate business owner in Atlanta, whose warehouse address was used by scammers in more than 4,400 Facebook and Instagram ads. These ads promised deep discounts on bulk merchandise, tricking people into sending money for products that never existed. "What sucks is we have to break it to people that they've been scammed. We don't even do online sales," Guzman told reporters.

Meta says it's fighting back with new technology and partnerships, including facial-recognition tools and collaborations with banks and other tech companies. A spokesperson described the situation as an "epidemic of scams" and insisted that Meta is taking aggressive action, removing more than 2 million accounts linked to scam centers in several countries this year alone. However, insiders tell a different story. Current and former Meta employees say the company has been reluctant to make it harder for advertisers to buy ads, fearing it could hurt the company's bottom line. Staff reportedly tolerated between eight and 32 fraud "strikes" before banning accounts, and scam enforcement was deprioritized to avoid losing ad revenue.

Victims of these scams often lose hundreds or even thousands of dollars. In one case, fake ads promised free spice racks from McCormick & Co. for just a small shipping fee, only to steal credit card details and rack up fraudulent charges. Another common scam involves fake puppy sales, with victims sending deposits for pets that never arrive. Some scam operations are even linked to human trafficking, with criminal groups forcing kidnapped victims to run online fraud schemes under threat of violence.

Meta maintains that it is not legally responsible for fraudulent content on its platforms, citing Section 230 of the Communications Decency Act, which protects tech companies from liability for user-generated content. In court filings, Meta has argued that it "does not owe a duty to users" when it comes to policing fraud. Meanwhile, a class-action lawsuit over allegedly inflated ad reach metrics is moving forward, putting even more pressure on Meta to address transparency and accountability.

Staying safe online takes a little extra effort, but it's well worth it. Here are some steps you can follow to avoid falling victim to scam ads.

1. Check the source and use strong antivirus software: Look for verified pages and official websites. Scammers often copy the names and logos of trusted brands, but the web address or page details may be off. Always double-check the URL for slight misspellings or extra characters, and avoid clicking links in ads if you're unsure about their legitimacy. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

2. Be skeptical of deals that seem too good to be true: If an ad offers products at an unbelievable price or promises huge returns, pause and investigate before clicking. Scammers often use flashy discounts or urgent language to lure people in quickly. Take a moment to think before you act, and remember that if something sounds impossible, it probably is.

3. Research the seller: Search for reviews and complaints about the company or individual. If you can't find any credible information, it's best to avoid the offer. A quick online search can reveal if others have reported scams or had bad experiences, and legitimate businesses usually have a track record you can verify.

4. Consider using a personal data removal service: There are companies that can help remove your personal info from data brokers and people-search sites. This means less of your data floating around for scammers to find and use. While these services usually charge a fee, they can save you a lot of time and hassle compared to doing it all yourself. Over time, you might notice fewer spam calls and emails, and even a lower risk of identity theft.

5. Never share sensitive information: Don't enter your credit card or bank details on unfamiliar sites. If you're asked for personal information, double-check the legitimacy of the request. Scammers may ask for sensitive data under the guise of "verifying your identity" or processing a payment, but reputable companies will never ask for this through insecure channels.

6. Keep your devices updated: Keeping your software updated adds an extra layer of protection against the latest threats. Updates often include important security patches that fix vulnerabilities hackers might try to exploit. By regularly updating your devices, you help close those security gaps and keep your personal information safer from scammers and malware.

7. Report suspicious ads: If you see a scam ad on Facebook or Instagram, report it using the platform's tools. This helps alert others and puts pressure on Meta to take action. Reporting is quick and anonymous, and it plays a crucial role in helping platforms identify patterns and remove harmful content.

8. Monitor your accounts: Regularly check your bank and credit card statements for unauthorized transactions, especially after making online purchases. Early detection can help you limit the damage if your information is compromised, and most banks have fraud protection services that can assist you if you spot something suspicious.

By following these steps, you can better protect yourself and your finances from online scams. Staying alert and informed is your best defense in today's digital world. The mess with scam ads on Meta's platforms shows why it's important to look out for yourself online. Meta says it's working on the problem, but many people think it's not moving fast enough. By staying careful, questioning suspicious offers and using good security tools, you can keep yourself safer. Until the platforms step up their game, protecting yourself is the smartest move you can make.

Exclusive: Bipartisan duo revives bill to fight online child pornography

Axios

21-05-2025

  • Politics


Sens. Josh Hawley and Dick Durbin will re-introduce legislation Wednesday that would hold tech companies accountable for hosting child sex abuse material.

Why it matters: The STOP CSAM Act would go after the statute that shields tech companies from being liable for what is posted on their platforms. President Trump has previously shown support for getting rid of the liability shield, known as Section 230, and efforts to combat online deepfakes have the White House's attention.

What's inside: The bill would allow victims to bring federal civil lawsuits against companies that intentionally, knowingly, or recklessly promote, store, or make child sex abuse material available. It also creates a new criminal provision prohibiting similar conduct. Certain child victims and witnesses in court would also get enhanced privacy protections.

Threat level: The National Center for Missing and Exploited Children said it saw an alarming increase in the use of AI to create child sexual abuse material in 2024. "Every day that Congress fails to protect kids online is another day that online predators can victimize children and steal their innocence—and social media companies are totally complicit," Hawley said in a statement. "Big Tech has woefully failed to police itself, and the American people are demanding that Congress intervene. We made significant headway last year to address Big Tech's failure to protect our kids online and it's time to build on that progress," Durbin said in a statement.

Flashback: The bill passed the Senate Judiciary Committee unanimously in 2023 after Hawley and Durbin agreed to an amendment to let people sue companies.

TikTok, Meta face tough curbs in Asia even as US efforts stall

Straits Times

12-05-2025

  • Business


TikTok's largest market by users is the US, but five of its 10 biggest globally are in South-east or South Asia. PHOTO: REUTERS

Some of the toughest new laws attempting to rein in TikTok, Instagram and Snapchat aren't coming from Washington or Brussels. They're emerging from capitals such as Canberra, Jakarta and Kuala Lumpur. Governments across the Asia-Pacific region are leading the global charge to protect children from online harms, presenting an unprecedented challenge to the likes of ByteDance, Meta Platforms and Snap in markets with some of their largest and most youthful user bases.

Australia in late 2024 passed a law requiring social media platforms to keep children under the age of 16 off their services. New Zealand's governing party last week put forward a bill that mirrors Australia's move. Indonesia is formulating restrictions on social media access for those under 18. Malaysia is requiring social media firms to obtain licences to operate in the country, while Singaporean policymakers have signalled they're open to minimum-age laws. Meanwhile, Vietnam is requiring foreign social platforms to verify their users' accounts and provide authorities with their identities on demand, and Pakistan wants such firms to register with a new agency.

'I've met with parents who have lost and buried their child. It's devastating,' Australian Prime Minister Anthony Albanese said in November. 'We can't as a government hear those messages from parents and say it's too hard. We have a responsibility to act.'

To be sure, it's unclear how strictly some of the measures will be enforced. And social media titans face headwinds elsewhere, such as the European Commission's Digital Markets and Digital Services Acts, along with moves by other nations attempting to curb children's access to the platforms. In the US, social media firms have come under fire in some states, but the federal government has yet to pass meaningful legislation requiring that they establish more guardrails. The Senate in July passed the Kids Online Safety Act, which would force companies to prioritise children's wellbeing, but the measure has stalled in the House. Meta faces a landmark antitrust case by the US Federal Trade Commission, while TikTok could be banned in the country. Meanwhile, one US law firm is pursuing a new legal strategy, focusing on product liability, to hold tech giants accountable for harms to children despite longstanding protections afforded by Section 230 of the Communications Decency Act.

New rules in Asia-Pacific could complicate companies' operations across the region, said Mr Ewan Lusty, a Singapore-based director at political and regulatory consultancy Flint Global. 'If you've got each country implementing their own version of a regulation, then the cost of complying with that will multiply' for tech firms, he said.

The emerging restrictions also pose a new threat because they could curtail the tech titans' growth in some of the world's most populous markets. South-east Asia is home to more than 650 million people, while South Asia's population stands at roughly 2 billion. Young internet users across the region are expected to play a vital role in propelling digital firms' expansion in the years to come. China has for years blocked foreign online platforms, shutting them out of a market of some 1.4 billion people. In a bid to capitalise on growth across Asia-Pacific, Alphabet's Google, Microsoft and other tech giants are investing billions of dollars in the region as young users increasingly communicate with friends online, shop, stream video and use generative AI.

Social network titans don't typically break out user counts or sales by country, but they often derive most of their revenue from developed economies in the West, where advertisers pay more to reach wealthier consumers. User growth in many richer nations, though, has slowed over the years. For Meta, South-east and South Asian nations make up significant global shares of Instagram and Facebook user accounts, with those consumers tending to be younger, according to data from digital consulting firm Kepios, which specialises in analysing online behaviour. Markets across the region also have some of the world's highest rates of user engagement for Meta's products, and many citizens depend on Facebook, especially, as a gateway to the internet. Meta and other firms also often use such countries as testing grounds for new product initiatives. TikTok's largest market by users is the US, but five of its 10 biggest globally are in South-east or South Asia, according to Kepios data. Snapchat has more than twice as many users in South Asia as in the US, the data shows.

Australia, which has a track record of battling Big Tech, in November passed its controversial law banning young children from social media beginning at the end of this year. Platforms will be responsible for enforcing the age limit, with penalties of as much as A$50 million (S$41 million) for breaches. While opinion polls have shown that many Australian voters support the new rule in principle, some companies, academics and children's rights groups call it flawed and question how it might be enforced. An executive at one major tech firm, asking not to be identified discussing sensitive matters, said Australia's move has caused consternation among companies and uncertainty over how things will proceed.

A Meta spokesperson said the company is committed to keeping young people safe and that the safety tools it has rolled out for such users have proven popular around the world. A spokesperson for Snap pointed to concerns that have been raised about Australia's new rules, but said the company would work with the Australian government ahead of their implementation and comply with any regulations. TikTok has in the past highlighted voluntary measures it has implemented to support teen safety. X declined to comment. The Asia Internet Coalition, an industry group that represents major tech players in matters of tech policy in the region, didn't respond to requests for comment on the regulatory moves.

Asia-Pacific policymakers in the past haven't been as quick as governments elsewhere to regulate tech firms, but that's changing, said Lusty of Flint Global. 'The region is becoming increasingly important in debates around how we govern the digital space,' he said. Bloomberg
