
Shareholders to Demand Action from Mark Zuckerberg and Meta on Child Safety
"Two weeks ago, I stood outside of Meta's office in NYC with bereaved parents whose children died as a result of sextortion, cyberbullying, and drug purchases on Meta's platforms, and demanded stronger protections for kids," said Sarah Gardner, CEO of the Heat Initiative. "Meta's most recent 'solution' is a Band-Aid. They promised parents that Instagram Teens would protect their kids from harm. In reality, it still recommends sexual, racist, and violent content on their feeds. We are asking shareholders to hold Mark Zuckerberg and Meta accountable and demand greater transparency about why child safety is still lagging."
"Meta algorithms designed to maximize user engagement have helped build online abuser networks, normalize cyberbullying, enable the exponential growth of child sexual abuse materials, and flood young users with addictive content that damages their mental health," said Michael Passoff, CEO of Proxy Impact. "And now, a major child safety concern is Meta's doubling down on AI despite the unique threats it poses to young users. Just this year, the National Center for Missing and Exploited Children saw 67,000 reports of suspected child sexual exploitation involving generative AI, a 1,325% increase from 2023. Meta's continued failure to address these issues poses significant regulatory, legal, and reputational risk, in addition to endangering innumerable young lives."
The resolution asks the Meta Board of Directors to publish "a report that includes targets and quantitative metrics appropriate to assessing whether and how Meta has improved its performance globally regarding child safety impacts and actual harm reduction to children on its platforms." Additional information for shareholders was filed with the SEC.
Meta has faced years of pressure over online child safety risks, including:
Attorneys general from 41 states and the District of Columbia filing lawsuits alleging that Meta Platforms has intentionally built programs with addictive features that harm young users.
1 out of 8 kids under 16 reporting that they experienced unwanted sexual advances on Instagram in the last 7 days, according to Meta's internal research.
A leading psychologist resigning from her position on Meta's SSI expert panel on suicide prevention and self-harm, alleging that Meta is willfully neglecting harmful content, disregarding expert recommendations, and prioritizing financial gain.
As many as 100,000 children being sexually harassed daily on Meta platforms in 2021. Meta took no action until it was called to testify before the Senate three years later.
Internal research leaked by Meta whistleblower Frances Haugen showing that the company was aware of many harms, including Instagram's toxic effects on teenage girls' mental health, such as thoughts of suicide and eating disorders.
Since 2019, Proxy Impact and Dr. Cooper have worked with members of the Interfaith Center on Corporate Responsibility, pension funds, foundations, and asset managers to empower investors to utilize their leverage to encourage Meta and other tech companies to strengthen child safety measures on social media.
Proxy Impact provides shareholder engagement and proxy voting services that promote sustainable and responsible business practices. For more information, visit www.proxyimpact.com.
Heat Initiative works to hold the world's most valuable and powerful tech companies accountable for failing to protect kids from online child sexual exploitation. Heat Initiative sees a future where children's safety is at the forefront of all existing and future technological developments.

Related Articles
Yahoo
Anthropic's Claude AI model can now handle longer prompts
Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt, part of an effort to attract more developers to the company's popular AI coding models. For Anthropic's API customers, the company's Claude Sonnet 4 AI model now has a one million token context window — meaning the AI can handle requests as long as 750,000 words, more than the entire Lord of the Rings trilogy, or 75,000 lines of code. That's roughly five times Claude's previous limit (200,000 tokens), and more than double the 400,000 token context window offered by OpenAI's GPT-5.

Long context will also be available for Claude Sonnet 4 through Anthropic's cloud partners, including on Amazon Bedrock and Google Cloud's Vertex AI.

Anthropic has built one of the largest enterprise businesses among AI model developers, largely by selling Claude to AI coding platforms such as Microsoft's GitHub Copilot, Windsurf, and Anysphere's Cursor. While Claude has become the model of choice among developers, GPT-5 may threaten Anthropic's dominance with its competitive pricing and strong coding performance. Anysphere CEO Michael Truell even helped OpenAI announce the launch of GPT-5, which is now the default AI model for new users in Cursor.

Anthropic's product lead for the Claude platform, Brad Abrams, told TechCrunch in an interview that he expects AI coding platforms to get a "lot of benefit" from this update. When asked if GPT-5 put a dent in Claude's API usage, Abrams downplayed the concern, saying he's "really happy with the API business and the way it's been growing."

Whereas OpenAI generates most of its revenue from consumer subscriptions to ChatGPT, Anthropic's business centers around selling AI models to enterprises through an API. That's made AI coding platforms a key customer for Anthropic, and could be why the company is throwing in some new perks to attract users in the face of GPT-5. Last week, Anthropic unveiled an updated version of its largest AI model, Claude Opus 4.1, which pushed the company's AI coding capabilities a bit further.

Generally speaking, AI models tend to perform better on all tasks when they have more context, but especially for software engineering problems. For example, if you ask an AI model to spin up a new feature for your app, it's likely to do a better job if it can see the entire project, rather than just a small section.

Abrams also told TechCrunch that Claude's large context window helps it perform better at long agentic coding tasks, in which the AI model works autonomously on a problem for minutes or hours. With a large context window, Claude can remember all its previous steps in long-horizon tasks.

But some companies have taken large context windows to an extreme, claiming their AI models can process massive prompts. Google offers a 2 million token context window for Gemini 2.5 Pro, and Meta offers a 10 million token context window for Llama 4 Scout. Some studies suggest there's a limit to how useful very large context windows can be, and that AI models are not great at processing massive prompts. Abrams said that Anthropic's research team focused on increasing not just the context window for Claude, but the "effective context window," suggesting that its AI can understand most of the information it's given. However, he declined to reveal Anthropic's exact techniques.
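For developers, using the larger window is a matter of opting into it on an API request. The following is a minimal sketch in Python, assuming Anthropic's standard SDK; the model ID ("claude-sonnet-4-20250514") and the long-context beta flag ("context-1m-2025-08-07") follow Anthropic's published naming patterns but should be treated as assumptions, not confirmed by this article.

```python
# Sketch: sending a very large prompt to Claude Sonnet 4 via Anthropic's
# Python SDK. The model ID and beta flag below are assumptions based on
# Anthropic's naming conventions; check the official docs before use.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical input: a dump of an entire project, far beyond the old
# 200,000-token limit but within the new one million token window.
with open("large_codebase_dump.txt") as f:
    big_context = f.read()

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",   # assumed Sonnet 4 model ID
    betas=["context-1m-2025-08-07"],    # assumed 1M-context beta flag
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": f"{big_context}\n\nSummarize the architecture of this project.",
    }],
)
print(response.content[0].text)
```

The point of the larger window, per Abrams, is exactly this pattern: handing the model a whole repository or a long agent history in one request instead of slicing it into fragments.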
When prompts to Claude Sonnet 4 exceed 200,000 tokens, Anthropic charges API users more: $6 per million input tokens and $22.50 per million output tokens, up from $3 per million input tokens and $15 per million output tokens.
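Those two tiers make per-request costs easy to estimate. Here is a small sketch using only the rates quoted above; it assumes, as the article implies, that the premium rate applies to the whole request once the input crosses 200,000 tokens.

```python
# Sketch: estimating Claude Sonnet 4 API cost under the two pricing tiers
# quoted above. Assumes the premium rate covers the entire request once
# input exceeds 200,000 tokens (an assumption, not stated in the article).

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one request, in US dollars."""
    if input_tokens > 200_000:
        input_rate, output_rate = 6.00, 22.50   # $/million tokens, long-context tier
    else:
        input_rate, output_rate = 3.00, 15.00   # $/million tokens, standard tier
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A full 1M-token prompt (~750,000 words) with a 4,000-token reply:
print(f"${estimate_cost_usd(1_000_000, 4_000):.2f}")   # ~ $6.09
```

At these rates, even a maximal one million token prompt costs on the order of a few dollars, which helps explain why Anthropic is pitching the feature at coding platforms that batch large codebases into single requests.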


Business Wire
Experts from Witherite Law Group Say Autonomous Trucks Are Not Ready for Texas Roads
DALLAS--(BUSINESS WIRE)--Attorney and truck safety advocate Amy Witherite warns that autonomous trucks still face serious safety gaps—acknowledged by their own developers, confirmed by independent studies, and underscored by industry experts.

Waabi CEO Raquel Urtasun has called her company's simulator-based approach "provably safe," saying real-world testing in the millions of miles "is nowhere near what would be required to provide the rigorous evidence necessary for a comprehensive safety case." Professor Philip Koopman of Carnegie Mellon University, one of the world's leading autonomous vehicle safety researchers, cautions that true safety requires ultra-reliability: "Safety isn't about working right most of the time. Safety is all about the rare case where it doesn't work properly. It has to work 99.999999999% of the time. AV companies are still working on the first few nines, with a bunch more nines to go."

Witherite says those two statements highlight the gap between marketing promises and operational reality: "Even the most advanced companies admit they're far from testing at the scale needed to prove safety under real-world conditions. Experts are telling us this technology is still working out the basics—so putting it on Texas highways is reckless."

This comes as Aurora Innovation begins nighttime runs of its self-driving trucks on the Dallas–Houston route—still with a human observer in the cab "though no manual intervention is required"—and the Texas A&M Transportation Institute warns that AI-driven systems remain limited by their programming, sensor range, and narrowly defined operational design domains.

Meanwhile, FMCSA's 2023 Pocket Guide to Large Truck and Bus Statistics shows Texas is not only the deadliest state for large truck crashes in raw numbers—with 821 fatalities in 2021—but also has a per-mile fatality rate of 0.29 per 100 million vehicle miles traveled, well above the U.S. average of 0.19. While a few states have even higher per-mile rates, Texas still ranks in the higher-risk tier nationally and far exceeds states like California despite having a smaller population. In 2023 alone, Texas recorded 650 deadly large-truck crashes—52% more than California, the next highest state.

"Texas can't afford to be the test track for unproven technology," Witherite said. "We already have the highest truck crash fatality numbers in the country and a safety rate worse than the national average. Until autonomous trucks can meet the extreme reliability experts demand, they have no place in live traffic."

Amy Witherite is the founding attorney of Witherite Law Group and a nationally recognized traffic safety advocate. She has represented hundreds of families affected by trucking collisions. Call 1 800 Truck Wreck or visit the firm's website to learn more.


Business Wire
KBRA Assigns Preliminary Ratings to Angel Oak Mortgage Trust 2025-9 (AOMT 2025-9)
NEW YORK--(BUSINESS WIRE)--KBRA assigns preliminary ratings to eight classes of mortgage-backed certificates from Angel Oak Mortgage Trust 2025-9 (AOMT 2025-9), a $288.7 million non-prime RMBS transaction. The underlying collateral, composed of 567 residential mortgages, is characterized by a significant concentration of loans underwritten using alternative income documentation. All the loans are either classified as non-qualified mortgages (52.6%) or exempt (47.4%) from the Ability-to-Repay/Qualified Mortgage rule due to being originated for non-consumer loan purposes. Angel Oak Mortgage Solutions originated 60.1% of the pool, with no other originator comprising over 10% of the collateral.

KBRA's rating approach incorporated loan-level analysis of the mortgage pool through its Residential Asset Loss Model (REALM), an examination of the results from third-party loan file due diligence, cash flow modeling analysis of the transaction's payment structure, reviews of key transaction parties, and an assessment of the transaction's legal structure and documentation. This analysis is further described in KBRA's U.S. RMBS Rating Methodology.

Further information on key credit considerations, sensitivity analyses that consider what factors can affect these credit ratings and how they could lead to an upgrade or a downgrade, and ESG factors (where they are a key driver behind the change to the credit rating or rating outlook) can be found in the full rating report referenced above. A description of all substantially material sources that were used to prepare the credit rating, and information on the methodology(ies) (inclusive of any material models and sensitivity analyses of the relevant key rating assumptions, as applicable) used in determining the credit rating, is available in the associated Information Disclosure Form(s), along with further disclosures relating to this rating action. Additional information regarding KBRA policies, methodologies, rating scales, and disclosures is available on KBRA's website.

About KBRA
Kroll Bond Rating Agency, LLC (KBRA), one of the major credit rating agencies (CRA), is a full-service CRA registered with the U.S. Securities and Exchange Commission as an NRSRO. Kroll Bond Rating Agency Europe Limited is registered as a CRA with the European Securities and Markets Authority. Kroll Bond Rating Agency UK Limited is registered as a CRA with the UK Financial Conduct Authority. In addition, KBRA is designated as a Designated Rating Organization (DRO) by the Ontario Securities Commission for issuers of asset-backed securities to file a short form prospectus or shelf prospectus. KBRA is also recognized as a Qualified Rating Agency by Taiwan's Financial Supervisory Commission and is recognized by the National Association of Insurance Commissioners as a Credit Rating Provider (CRP) in the U.S.

Doc ID: 1010769