Bug That Showed Violent Content in Instagram Feeds Is Fixed, Meta Says
Meta, the parent company of Instagram, apologized on Thursday for the violent, graphic content some users saw on their Instagram Reels feeds. Meta attributed the problem to an error the company says has been addressed.
"We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended," a Meta spokesperson said in a statement provided to CNET. "We apologize for the mistake."
Meta went on to say that the incident was an error unrelated to any content-policy changes the company has made. At the start of the year, Instagram made some significant changes to its user and content-creation policies, but these changes didn't specifically address content filtering or inappropriate content appearing on feeds.
Meta itself made separate content-moderation changes more recently, dismantling its fact-checking department in favor of community-driven moderation. Amnesty International warned earlier this month that those changes could raise the risk of fueling violence.
Read more: Instagram May Spin Off Reels As a Standalone App, Report Says
Meta says most of the graphic or disturbing imagery it flags is removed from feeds and replaced with a warning label that users must click through to view the imagery. Some content, Meta says, is also filtered out for users younger than 18. The company says it develops its policies on violent and graphic imagery with the help of international experts and that refining those policies is an ongoing process.
Users posted on social media and message boards, including Reddit, about the unwanted imagery they saw on Instagram, presumably because of the glitch. The imagery included shootings, beheadings, people being struck by vehicles and other violent acts.
Brooke Erin Duffy, a social media researcher and associate professor at Cornell University, said she's unconvinced by Meta's claims that the violent-content issue was unrelated to policy changes.
"Content moderation systems -- whether powered by AI or human labor -- are never failsafe," Duffy told CNET. "And while many speculated that Meta's moderation overhaul (announced last month) would create heightened risks and vulnerabilities, yesterday's 'glitch' provided firsthand evidence of the costs of a less-restrained platform."
Duffy added that while moderating social-media platforms is difficult, "platforms' moderation guidelines have served as safety mechanisms for users, especially those from marginalized communities. Meta's replacement of its existing system with a 'community notes' feature represents a step backward in terms of user protection."

Related Articles


TechCrunch
Meta sues AI 'nudify' app Crush AI for advertising on its platforms
Meta has sued the maker of a popular AI 'nudify' app, Crush AI, that reportedly ran thousands of ads across Meta's platforms. In addition to the lawsuit, Meta says it's taking new measures to crack down on other apps like Crush AI.
In a lawsuit filed in Hong Kong, Meta alleged Joy Timeline HK, the entity behind Crush AI, attempted to circumvent the company's review process to distribute ads for AI nudify services. Meta said in a blog post that it repeatedly removed ads by the entity for violating its policies, but claims Joy Timeline HK continued to place additional ads anyway.
Crush AI, which uses generative AI to make fake, sexually explicit images of real people without their consent, reportedly ran more than 8,000 ads for its 'AI undresser' services on Meta's platform in the first two weeks of 2025, according to the author of the Faked Up newsletter, Alexios Mantzarlis. In a January report, Mantzarlis claimed that Crush AI's websites received roughly 90% of their traffic from either Facebook or Instagram, and that he flagged several of these websites to Meta.
Crush AI reportedly evaded Meta's ad review processes by setting up dozens of advertiser accounts and frequently changing domain names. Many of Crush AI's advertiser accounts, according to Mantzarlis, were named 'Eraser Annyone's Clothes' followed by different numbers. At one point, Crush AI even had a Facebook page promoting its service.
Facebook and Instagram are hardly the only platforms dealing with such challenges. As social media companies like X and Meta race to add generative AI to their apps, they've also struggled to moderate how AI tools can make their platforms unsafe for users, particularly minors. Researchers have found that links to AI undressing apps soared in 2024 on platforms like X and Reddit, and on YouTube, millions of people were reportedly served ads for such apps. In response to this growing problem, Meta and TikTok have banned keyword searches for AI nudify apps, but getting these services off their platforms entirely has proven challenging.
In a blog post, Meta said it has developed new technology to specifically identify ads for AI nudify or undressing services 'even when the ads themselves don't include nudity.' The company said it is now using matching technology to help find and remove copycat ads more quickly, and has expanded the list of terms, phrases and emoji that are flagged by its systems.
Meta said it is also applying the tactics it has traditionally used to disrupt networks of bad actors to these new networks of accounts running ads for AI nudify services. Since the start of 2025, Meta said, it has disrupted four separate networks promoting these services.
Outside of its apps, the company said it will begin sharing information about AI nudify apps through the Tech Coalition's Lantern program, a collective effort between Google, Meta, Snap and other companies to prevent child sexual exploitation online. Meta says it has provided more than 3,800 unique URLs to this network since March.
On the legislative front, Meta said it would 'continue to support legislation that empowers parents to oversee and approve their teens' app downloads.' The company previously supported the US Take It Down Act and said it's now working with lawmakers to implement it.


Fast Company
Thanks to AI, the one-person unicorn is closer than you think
When Mike Krieger helped launch Instagram in 2010 as a cofounder, building something as simple as a photo filter took his team weeks of engineering time and tough trade-offs. Now, as chief product officer at Anthropic, he's watching early-stage startup founders accomplish far more in far less time—sometimes over a single weekend. Thanks to intuitive agentic AI models (or AI agents), founders are experimenting with product, code, and business strategies, often without needing to hire specialized team members.
'When I think back to Instagram's early days, our famously small team had to make painful decisions—either explore adding video or focus on our core creativity,' Krieger tells Fast Company. 'With AI agents, startups can now run experiments in parallel and build products faster than ever before.'
To him, it signals a seismic shift: the rise of agentic entrepreneurship. Enterprises can supercharge engineering teams while individuals with bold ideas but no technical background can finally bring their visions to life.
'At Anthropic, 90% of Claude's code is now written by AI, and this has completely transformed how we build products. Recently, Claude helped me prototype something in 25 minutes that would have taken me six hours,' Krieger says. 'I see founders who tried every model, couldn't get their startup to work, then with Claude, their startup suddenly works.'
Krieger believes agentic AI is fundamentally redefining what it means to be a founder. You no longer need to write code or raise significant capital to start building. The bottlenecks, he says, have shifted to decision-making and operational friction—like managing merge queues.
And the numbers support this momentum. In its first week of launch, Claude 4 reportedly tripled Anthropic's subscriber base and now accounts for more than 60% of the company's API traffic. Usage of Claude Code, its specialized AI coding agent, has spiked nearly 40%, drawing interest from both developers and nontechnical builders. Krieger shared that some users have even begun treating AI agents less like tools and more like capable creative collaborators.
'AI models can now function like an entry-level worker, and that is going to have a big impact on the workforce. We think we need to talk about this so we can prepare our economy and our society for this change, which is happening very fast,' he says. 'It's too late to stop the train—but we can steer it in the right direction.'
A few weeks ago, Anthropic CEO Dario Amodei predicted that 'the first one-employee billion-dollar company' could emerge as soon as 2026, enabled by AI. He also suggested AI could eliminate half of all entry-level jobs within the next five years—a claim that drew immediate pushback from some in the tech industry.
Among the skeptics was Google CEO Sundar Pichai, who cautioned against overestimating the reliability of AI systems like Gemini. 'Even the best models still make basic mistakes,' Pichai said during the recent Bloomberg Tech Summit in San Francisco. 'Are we currently on an absolute path to AGI? I don't think anyone can say for sure.'
On the prospect of AI displacing the workforce in the near future, Pichai remained measured. 'We've made predictions like that for the last 20 years about technology and automation,' he said. 'And it hasn't quite played out that way.'
Yet even amid skepticism, a quieter revolution is unfolding beneath the surface of agentic AI—one that's reshaping how work itself is defined in the era of intelligent software collaborators.
MCP: The Infrastructure That Makes AI 'Work'
The unsung hero behind Anthropic and Claude's leap in capability isn't just the model itself—it's the Model Context Protocol (MCP). While Claude 4 is praised for its intelligence and natural language fluency, MCP is the system-level breakthrough that enables it to move from passive assistant to active collaborator.
This open standard allows Claude's AI agents to securely interface with tools like GitHub, Stripe, Webflow, Notion, and even custom internal systems. As a result, Claude isn't limited to answering prompts. It can pull real-time analytics, trigger actions, update databases, launch web assets, and manage entire project pipelines. Just as HTTP enabled browsers to interact with websites, MCP is creating a universal interface layer for AI agents to operate across digital tools.
'Previously, AI agents were largely isolated—they could process information you gave them, but they couldn't directly interact with your actual tools and systems,' Krieger says. 'By solving the connection problem together, we're building infrastructure that will unlock entirely new possibilities for human-AI collaboration, making AI systems dramatically more useful and relevant in real-world contexts.'
Major tech companies are already integrating MCP. Microsoft has built it into Windows 11, Azure, and GitHub, allowing AI agents to run workflows across OS and cloud infrastructure. Google has added it to Gemini SDKs to bridge model interactions with live apps. Companies like Novo Nordisk, GitLab, Lyft, and Intercom are also deploying Claude agents into live workflows.
In this light, Amodei's 'one-person unicorn' prediction seems less like hype and more like a reflection of a deeper platform shift. 'As developers build new connections between knowledge bases, development environments, and AI assistants, we're seeing the early emergence of the more connected AI ecosystem we envisioned,' Krieger says. 'As AI assistants become more agentic, MCP will evolve to support increasingly sophisticated workflows. [MCP] might be the most important thing Anthropic has ever shipped.'
Agentic AI Is Redefining the Modern Startup Tech Stack
Krieger sees the combination of Claude 4 and MCP as a genuine platform shift—one where the AI acts like a partner rather than just a productivity tool. He describes Claude Opus 4 as Anthropic's most powerful agentic model yet and the world's best coding model.
'[Opus 4] can work autonomously for nearly seven hours, which transforms how teams approach work. When I can prototype something in minutes, that fundamentally changes what's possible for a single person,' Krieger says. 'In my experience, it mirrors how people manage their work. That level of autonomous task execution just wasn't possible before.'
With MCP in play, Claude becomes more than an assistant. It can push code, analyze logs, manage documentation, and send updates—without the constant context switching that slows teams down. In some cases, Krieger says, it simulates workflows that once required coordination across multiple departments. 'When you can iterate at speed, every manual process, every unnecessary meeting becomes this jarring interruption,' he noted.
Still, not everyone is convinced that AI-powered unicorns are imminent. Analysts caution that while AI agents can automate many workflows, they can't yet match the experience seasoned professionals bring. 'The state of LLM-based AI agents is that you must give them simple decisions to make to [get] reliable answers. We are not close to being able to throw a bunch of data at an AI agent and trust its decision,' Tom Coshow, a senior director analyst at Gartner, tells Fast Company. 'Is there an automatic VP of sales ready to go? Not even close.'
Coshow emphasizes the need for realistic expectations. 'It's important to get real about what you can and can't build,' he says. 'No-code design is incredibly powerful, but it also creates this illusion that anything you type into the box will just magically work. It doesn't.' Building robust AI agents for real-world business use, he explains, is far from trivial, noting, 'Complex agents are hard to get right. LLMs are inherently probabilistic, and most business processes simply can't rely on that kind of unpredictability.'
A Brave New Startup Era?
Anthropic's core bet reflects its broader philosophy: We're moving toward a world where major chunks of work are automated. 'It's better to be aware of the risk and adjust to the change than to take the chance and be caught unprepared,' Krieger says. 'We're seeing this shift begin with tech companies, but it's going to move quickly into other knowledge-intensive industries.'
So, is the one-person unicorn just hype—or a sign of things to come? It may still be too early to know. For experts like Coshow, the future lies not in abrupt disruption, but in careful evolution. 'The path forward is well-designed agentic workflows with a human in the loop,' he says.
Whether or not a billion-dollar solo startup emerges by 2026, the tools to build one are already here. And that, as Krieger sees it, changes everything. 'It's going to be about finding people who can work at the intersection of customer problems and AI capabilities,' he says. 'The most valuable early hire might not be a traditional engineer—it could be someone who translates needs into iterative, AI-powered solutions. The one-person unicorn will be relentlessly curious, and fluent in working with intelligent collaborators.'
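To give a rough sense of the Model Context Protocol described above, the sketch below defines a tiny MCP server that exposes one tool and one resource an AI agent could discover and call. It is a minimal, hypothetical example assuming the open-source MCP Python SDK (the `mcp` package) and its FastMCP helper; the server name, tool, and stub data are invented for illustration and are not part of any product mentioned in these articles.

```python
# Minimal, hypothetical MCP server sketch (assumes the open-source `mcp` Python SDK).
# The server name, tool, and stub data are invented for illustration only.
from mcp.server.fastmcp import FastMCP

# An MCP server advertises tools and resources that a connected AI agent
# (for example, a desktop assistant) can discover and invoke over a standard transport.
mcp = FastMCP("demo-analytics")

@mcp.tool()
def weekly_signups(week: str) -> int:
    """Return the number of new signups for a given ISO week (stub data)."""
    # A real server would query a database or an internal API here.
    fake_db = {"2025-W01": 412, "2025-W02": 530}
    return fake_db.get(week, 0)

@mcp.resource("docs://changelog")
def changelog() -> str:
    """Expose a read-only document the agent can pull into its context."""
    return "v1.2: added CSV export; v1.1: fixed onboarding bug"

if __name__ == "__main__":
    # Runs over stdio by default, so a client application can spawn the server
    # as a subprocess and exchange messages with it.
    mcp.run()
```

The point of the protocol is that the agent, not the developer, decides when to call a tool like weekly_signups or read a resource like docs://changelog; the server only has to describe what it offers.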

Engadget
Meta is cracking down on AI 'nudify' apps
Meta is finally cracking down on "nudify" apps that use AI to generate nonconsensual nude and explicit images of celebrities, influencers and others. The company is suing one app maker that's frequently advertised such apps on Facebook and Instagram, and taking new steps to prevent ads for similar services.
The crackdown comes months after several researchers and journalists raised the alarm about such apps. A recent report from CBS News identified at least "hundreds" of ads on Meta's platform promoting apps that allow users to "remove clothing" from images of celebrities and others.
One app in particular, called Crush AI, has apparently been a prolific advertiser on Facebook and Instagram. Researcher Alexios Mantzarlis, director of Cornell Tech's Security, Trust and Safety Initiative, reported back in January that Crush AI had run more than 8,000 ads on Facebook and Instagram since last fall.
Now, Meta says it has filed a lawsuit against Joy Timeline HK Limited, the Hong Kong-based company behind Crush AI and other nudify apps. "This follows multiple attempts by Joy Timeline HK Limited to circumvent Meta's ad review process and continue placing these ads, after they were repeatedly removed for breaking our rules," the company wrote in a blog post. Joy Timeline HK Limited didn't immediately respond to a request for comment.
Meta also says it's taking new steps to prevent apps like these from advertising on its platform. "We've developed new technology specifically designed to identify these types of ads — even when the ads themselves don't include nudity — and use matching technology to help us find and remove copycat ads more quickly," Meta wrote. "We've worked with external experts and our own specialist teams to expand the list of safety-related terms, phrases and emojis that our systems are trained to detect within these ads." The social network says it also plans to work with other tech platforms, including app store owners, to share relevant details about entities that abuse its platform.
Nudify apps aren't the only entities that have exploited Meta's advertising platform to run ads featuring celebrity deepfakes. Meta has also struggled to contain shady advertisers that use AI-manipulated video of public figures to promote scams. The company's independent Oversight Board, which weighs in on content moderation issues affecting Facebook and Instagram, recently criticized Meta for under-enforcing its rules prohibiting such ads.