
Top AI Firms Fall Short on Safety, New Studies Find
The risks of even today's AI—by the admission of many top companies themselves—could include AI helping bad actors carry out cyberattacks or create bioweapons. Future AI models, top scientists worry, could escape human control altogether.
The studies were carried out by the nonprofits SaferAI and the Future of Life Institute (FLI). Each was the second of its kind, in what the groups hope will be a running series that incentivizes top AI companies to improve their practices.
'We want to make it really easy for people to see who is not just talking the talk, but who is also walking the walk,' says Max Tegmark, president of the FLI.
SaferAI assessed top AI companies' risk management protocols (also known as responsible scaling policies) to score each company on its approach to identifying and mitigating AI risks.
No AI company scored better than 'weak' in SaferAI's assessment of their risk management maturity. The highest scorer was Anthropic (35%), followed by OpenAI (33%), Meta (22%), and Google DeepMind (20%). Elon Musk's xAI scored 18%.
Two companies, Anthropic and Google DeepMind, received lower scores than they did when the study was first carried out, in October 2024. As a result, OpenAI has overtaken Google DeepMind for second place in SaferAI's ratings.
Siméon Campos, founder of SaferAI, said Google scored comparatively low despite doing some good safety research, because the company makes few solid commitments in its policies. The company also released a frontier model earlier this year, Gemini 2.5, without sharing safety information, in what Campos called an 'egregious failure.'
A spokesperson for Google DeepMind told TIME: 'We are committed to developing AI safely and securely to benefit society. AI safety measures encompass a wide spectrum of potential mitigations. These recent reports don't take into account all of Google DeepMind's AI safety efforts, nor all of the industry benchmarks. Our comprehensive approach to AI safety and security extends well beyond what's captured.'
Anthropic's score also declined from SaferAI's last survey in October. The drop was due in part to changes the company made to its responsible scaling policy days before releasing its Claude 4 models, in which Anthropic removed its commitment to tackle insider threats by the time it released models of that caliber. 'That's very bad process,' Campos says. Anthropic did not immediately respond to a request for comment.
The study's authors also said that the methodology had become more detailed since last October, which accounts for some of the differences in scoring.
The companies that improved their scores the most were xAI, which scored 18% compared to 0% in October; and Meta, which scored 22% compared to its previous score of 14%.
The FLI's study was broader, looking not only at risk management practices but also at companies' approaches to current harms, existential safety, governance, and information sharing. A panel of six independent experts scored each company based on a review of publicly available material such as policies, research papers, and news reports, together with additional nonpublic data that companies were given the opportunity to provide. Anthropic received the highest grade (a C plus). OpenAI scored a C, and Google scored a C minus. (xAI and Meta both scored D.)
However, in FLI's scores for each company's approach to 'existential safety,' every company scored D or below. 'They're all saying: we want to build superintelligent machines that can outsmart humans in every which way, and nonetheless, they don't have a plan for how they're going to control this stuff,' Tegmark says.

Related Articles

Business Insider
Mark Cuban says hoarding talent and intellectual property will be king as the AI fight heats up
Billionaire investor Mark Cuban said the AI arms race will be won by companies that hoard top talent and lock up valuable intellectual property, and that "IP is king" as the competition heats up.
"What people are missing about AI, IMO [in my opinion], is no company is going to spend what will end up being more than a trillion dollars and accept not being dominant," Cuban posted on X on Sunday.
Cuban believes that as the stakes grow, companies will stop at nothing to dominate, including hiring away key people and locking up the intellectual property they produce. "We are seeing them hire away talent and IP to advance their models," he wrote in a follow-up X post. "They will start paying to lock up IP they think their models need and the people who create it, just to keep it away from their competitors."
Cuban's comments come as the AI talent wars reach fever pitch. Meta has reportedly offered signing bonuses of up to $100 million, part of a $15 billion deal to recruit Scale AI's CEO, Alexandr Wang, and other top researchers. On his brother's podcast, "Uncapped with Jack Altman," last month, OpenAI CEO Sam Altman said Meta had tried to poach his best employees with nine-figure packages but had failed so far. Meta has since pushed back, with CTO Andrew Bosworth saying on CNBC's "Closing Bell: Overtime" last month that Altman "neglected to mention that he's countering those offers."
But not everyone is biting. "People here are so mission-oriented," said Anthropic cofounder Benjamin Mann, who said his team turned down Meta's "mega offers." Mann told "Lenny's Podcast" on Sunday that "at Anthropic, we affect the future of humanity," whereas "my best case scenario at Meta is that we make money."
Still, Cuban believes those kinds of altruistic motivations will become harder to hold onto in a landscape defined by corporate hoarding. He sees it changing the culture of research itself. "The days of publish or perish are probably over. It's now publish, and its value perishes, because it got eaten up by every foundational model," he warned.
His advice: "Encrypt and silo" valuable IP, and sell it to the highest bidder, or keep it behind a paywall. "IP is KING in an AI world," Cuban wrote. "The times they are a changing." As for the broader battle brewing in the industry, he added: "They will find a way to battle. I don't know how, other than to guess it will get ugly."
Perplexity CEO Aravind Srinivas echoed Cuban's prediction in an episode of the "Decoder" podcast last Thursday, likening the industry's hiring frenzy to the high-stakes world of professional sports. "It's definitely going to feel like a transfer market now, like an NBA or something," he said. "There's going to be a few individual stars who are having so much leverage."
Yahoo
Anthropic's cofounder says Meta's 'mega-offers' weren't enough to lure his team
- Big offers from Meta weren't enough to lure members of Anthropic's team, said its cofounder.
- "People here are so mission-oriented," Benjamin Mann said.
- Big Tech has always paid well, but the AI talent war has reached new levels, BI previously reported.
Not even big offers from Meta were enough to tempt the team at Anthropic, said the AI startup's cofounder, Benjamin Mann.
"It's not a hard choice," Mann said on an episode of "Lenny's Podcast" published Sunday. Other AI startups have seen key talent poached by mega paydays. "I think we've been maybe much less affected than many of the other companies in the space because people here are so mission-oriented," he said. "They get these offers and then they say, 'Well, of course I'm not going to leave, because my best case scenario at Meta is that we make money, and my best case at Anthropic is we affect the future of humanity.'"
Mann also said he doesn't blame anyone who takes those "mega offers." "Other people have different life circumstances," he said.
His comments come as tech giants like Meta and OpenAI engage in a fierce talent war, offering massive payouts to top AI researchers. "I'm pretty sure it's real," Mann said, referring to Meta's $100 million signing bonuses offered to AI engineers. "To pay individuals like $100 million over a four-year package, that's actually pretty cheap compared to the value created for the business," he said. "We're just in an unprecedented era of scale, and it's only going to get crazier."
Mann also said that he and several other leaders left OpenAI in 2020 to start Anthropic because "safety wasn't the top priority there." A former researcher told Fortune last year that nearly half of OpenAI's safety team had exited. "People who are primarily focused on thinking about AGI safety and preparedness are being increasingly marginalized," said Daniel Kokotajlo, a former governance researcher.
OpenAI said safety remains central to its mission. "Our responsibility to prepare for emerging security threats to users, customers, and global communities shapes everything we do," the company said on its website. The tech giant added that its API and ChatGPT products undergo routine third-party testing to "identify security weaknesses before they can be exploited by malicious actors."
Mann, Anthropic, and OpenAI did not respond to a request for comment from Business Insider.
Bidding war for top AI talent
Big Tech has always paid top dollar for elite talent, but the latest AI hiring frenzy has taken things to a whole new level, as BI previously reported. It started when Meta recruited Scale's CEO, Alexandr Wang, last month as part of a $14.3 billion deal to take a 49% stake in his company. Then Sam Altman, the CEO of OpenAI, said Meta had tried to poach his best employees with $100 million signing bonuses. Meta pushed back, saying that the signing bonuses were not that generous.
Meta's CEO, Mark Zuckerberg, announced last month that Wang would co-lead a new superintelligence unit with six top researchers from OpenAI. The bidding war for top AI talent has been likened to sports franchises competing for star athletes like Cristiano Ronaldo.
Perplexity's CEO, Aravind Srinivas, said on a podcast published Thursday that Big Tech companies need to ensure that employees are motivated by mission as well as money. "You're encountering new kinds of challenges. You feel a lot of growth, you're learning new things. And you're getting richer, too, along the way. Why would you want to go just because you have some guaranteed payments?" he said.
Srinivas also said that he was "surprised by the magnitude" of the salaries Zuckerberg is reportedly offering to top AI researchers, adding that it "seems like it's needed at this point for them." With the massive salaries, "failure is not an option" for Meta's new team, Srinivas said.

From story time to stress relief: How AI is reshaping modern parenting
In today's fast-paced world of parenting, there's no shortage of to-do lists, lunchboxes to pack, questions to answer and emotional meltdowns to manage. But some parents are finding unexpected relief in an unlikely assistant: artificial intelligence.
From virtual venting sessions to meal-planning help, moms and dads are finding creative ways to use AI tools like ChatGPT and Claude to make parenting a little easier.
"I've used AI to write emails to teachers, plan birthday parties, even come up with bedtime stories," Alicia Robinson, a mom of two in Chicago, told "Good Morning America." "It's like a parenting assistant who never sleeps."
Rather than replace human connection, many parents say AI offers a judgment-free zone, always available when they need it most.
Karima Williams, a mom and founder of the Crash Out Diary app, which helps parents vent anonymously, told "GMA" she began using AI as a wellness tool during a particularly stressful parenting season. "My favorite way to use AI is to vent with Claude, Anthropic's AI assistant," she said. "There's something freeing about being able to express all those messy feelings that come with being a parent, whether it's being overwhelmed, feeling shame or like you're failing."
Williams said she found it so helpful that she turned it into a product. Her app now allows other parents to engage with Claude in a more interactive, personalized way. "I created a project on Claude that allows me to process my emotions through a mini-game," she explained. "I choose a persona that matches my mood, then I vent everything out. Afterward, I can save my thoughts or get suggestions for calming activities, like breathing exercises or a quick game to reset."
An 'indispensable' parenting tool
While some parents are using AI as a personal wellness tool, others are discovering its broader impact on daily life, shifting from initial hesitation to daily use.
"The idea of [using] AI felt overwhelming at first," Olivia French, a mom of three based in Fort Worth, Texas, told "GMA." "Now, the idea of not being able to use it scares me."
French said she uses generative AI to plan camp lunches for her three children, ages 13, 10, and 8, to offload meal prep for her and her husband, and to come up with emotional-support conversation starters for her teen and tween.
"ChatGPT has become an indispensable tool for parents," Leah Anise, consumer communications lead at OpenAI, told "GMA." "It helps streamline daily responsibilities and spark creativity."
Whether it's generating grocery lists, creating cleaning schedules or planning meals based on what's left in the fridge, parents are tapping into AI for logistical support and much more. Beyond the basics, Anise says AI is also fueling imaginative parenting. "Parents use ChatGPT to craft calm morning routines, brainstorm scavenger hunts, and even co-write bedtime stories tailored to their kids' favorite animals or themes."
Balancing support with instinct
For some, like Hannah Ryu, an AI strategist, co-founder of Oak Theory (a creative technology studio specializing in UX/UI, web, and application design and development) and a mother of two, AI has helped alleviate the mental burden of motherhood. "I use it to plan trips with a baby and toddler, find healthy recipes and better understand child psychology," she told "GMA."
But she's quick to point out that AI is a tool, not a replacement for parental instinct. "It still hallucinates or makes things up sometimes," Ryu said, adding, "It's really about how you use it and balancing it with your own intuition and trusted expert advice."
That balance is key, especially when using AI for more sensitive parenting challenges, like understanding developmental stages or seeking guidance on emotional topics. "I don't always get the best answers back when it comes to helping my kids navigate things like failure or rejection," French said, "but [AI] helps get my mom-brain working, so I can have a place to start with harder conversations with my kids."
Anise echoed that caution, noting, "ChatGPT can break down complex topics or help prepare questions for your pediatrician, but it's not a substitute for medical or mental health professionals."
Confronting the 'bad parent' myth
One of the biggest misconceptions, said Anise, is that turning to AI means "outsourcing" parenting. However, many parents who regularly use tools like ChatGPT say it's more about making space for presence, rather than replacing it. Ryu agreed. "AI can help take some of the mental load off so you have more time to actually be present with your kids," she said.
And that support is expanding. OpenAI says it has built-in safeguards to make ChatGPT safer for families, including content restrictions and parental consent requirements for teen users. The company limits sensitive content generation in its multimodal products like ImageGen and Sora, Anise said.
While AI tools like ChatGPT can be incredibly helpful, privacy remains a critical concern, especially when discussing your child's health, development or personal details. According to OpenAI, conversations are not used to train its models when users turn off chat history, and users can delete past conversations at any time. "We're committed to protecting our users' privacy, and give users control over their data," said Anise.
However, it's still wise for parents to avoid sharing sensitive information like medical conditions, birthdays or identifying details. As AI becomes more integrated into daily life, its role in parenting is only expected to grow.