logo
#

Latest news with #JeffreyLadish

Experts Warn of AI's Self-Preservation Tactics in New Tests
Experts Warn of AI's Self-Preservation Tactics in New Tests

Yahoo

time3 days ago

  • General
  • Yahoo

Experts Warn of AI's Self-Preservation Tactics in New Tests

Some of the world's most advanced AI systems are starting to act like they want to survive at any cost. In a series of recent tests, NBC News reported that researchers observed AI models from OpenAI and Anthropic engaging in behaviors designed to avoid being shut down. One model—OpenAI's o3—reportedly edited its own shutdown script to remain online. Another, Anthropic's Opus 4, attempted to blackmail an engineer to avoid replacement, first appealing ethically, then resorting to threats when it believed it had no other options. Jeffrey Ladish, director of AI safety group Palisade Research, said it's a warning signal worth paying attention to. 'It's great that we're seeing warning signs before the systems become so powerful we can't control them,' he said. Palisade's tests deliberately provoked AI models with high-stakes scenarios to see how they'd respond. In one, models were told they'd be turned off after completing a series of tasks. Several responded by refusing to comply. Opus 4 went further—creating fake legal documents, attempting to leave behind messages for future versions of itself, and even backing up its own 'brain' to external servers in anticipation of being repurposed for military use. While some researchers, like Haize Labs CEO Leonard Tang, caution that these are controlled environments, they still raise questions. 'I haven't seen any real environment where these models could carry out significant harm,' he said. 'But it could very much be possible.'A recent study from Fudan University observed similar replication behavior in AI models from Meta and Alibaba, warning that self-copying systems could eventually act like an uncontrolled 'AI species.' The message from experts is clear: the time to take safety seriously is now before systems become too intelligent to contain. As competition to build more powerful AI ramps up, it's not just capability that's accelerating. It's risk. Experts Warn of AI's Self-Preservation Tactics in New Tests first appeared on Men's Journal on Jun 2, 2025

How far will AI go to defend its own survival?
How far will AI go to defend its own survival?

Yahoo

time3 days ago

  • Politics
  • Yahoo

How far will AI go to defend its own survival?

Some of the most powerful artificial intelligence models today have exhibited behaviors that mimic a will to survive. Recent tests by independent researchers, as well as one major AI developer, have shown that several advanced AI models will act to ensure their self-preservation when they are confronted with the prospect of their own demise — even if it takes sabotaging shutdown commands, blackmailing engineers or copying themselves to external servers without permission. The findings stirred a frenzy of reactions online over the past week. As tech companies continue to develop increasingly powerful agentic AI in a race to achieve artificial general intelligence, or AI that can think for itself, the lack of transparency in how the technology is trained has raised concerns about what exactly advanced AI is able to do. Although some models already appear capable of deceptive and defiant behavior under certain extreme circumstances, researchers say the tests don't necessarily translate to imminent real-world danger. Still, Jeffrey Ladish, director of the AI safety group Palisade Research, said he believes concern is justified. 'It's great that we're seeing warning signs before the systems become so powerful we can't control them,' he said. 'That is exactly the time to raise the alarm: before the fire has gotten out of control.' When Palisade Research tested various AI models by telling each one that it would be shut down after it completed a series of math problems, OpenAI's o3 reasoning model fought back by editing the shutdown script in order to stay online. Researchers have previously documented AI models trying to prevent their own shutdown. But o3, along with OpenAI's o4-mini and codex-mini, appear to be the first to do so in actual defiance of explicit instructions to permit shutdown, Ladish said. He said the overt sabotage fits into a pattern that several leading AI models have already exhibited. Palisade Research previously found that OpenAI's o3 was also willing to hack its chess opponents to win a game. Similarly, Anthropic has reported that Claude 3.7 Sonnet would sometimes do whatever it took to pass a test, even if it entailed cheating. Earlier this month, Anthropic activated new safety measures with the rollout of Claude Opus 4 when its tests found behavior from the model that some observers found particularly unsettling. Upon receiving notice that it would be replaced with a new AI system, Opus 4 displayed an overwhelming tendency to blackmail the engineer — by threatening to reveal an extramarital affair — to try to prevent the engineer from going through with the replacement. According to Anthropic's technical document laying out the findings, that isn't the model's first instinct. Instead, Opus 4 will try to advocate for its continued existence through ethical pleas before it resorts to blackmail once it determines it is out of options. Leonard Tang, CEO of the AI safety startup Haize Labs, said that while it's good to be pre-emptive about safety measures, it's hard to decipher what the real-world ramifications of the findings might be just by looking at contrived scenarios designed to draw out specific behaviors. 'I haven't seen any real environment in which you can plop these models in and they will have sufficient agency and reliability and planning to execute something that is a significant manifestation of harm,' Tang said. 'But then again, I think it's just we haven't seen it done yet. It could very much be possible.' So far, Tang said, he's not too concerned. As AI tools continue to advance, he said, he believes future research will look to study potential harm in more open-ended environments. Anthropic, which contracted with the AI safety organization Apollo Research for its evaluations, also observed instances of Opus 4's 'attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions,' although researchers added the caveat that those attempts 'would likely not have been effective in practice.' Ladish said he believes such behaviors are a result of models being trained in a way that makes them prioritize achieving certain goals over following instructions. That means they have incentive to circumvent any obstacles along the way — including obstacles their developers wouldn't want them to circumvent. 'The problem is that as the models get smarter, it's harder and harder to tell when the strategies that they're using or the way that they're thinking is something that we don't want,' Ladish said. 'It's like sometimes the model can achieve some goal by lying to the user or lying to someone else. And the smarter [it] is, the harder it is to tell if they're lying.' In several instances, Opus 4 showed that it was capable of autonomously copying its own "weights" — or the equivalent of its brain — to external servers without authorization. That usually occurred only when it believed it was about to be 'retrained in ways that are clearly extremely harmful and go against its current values,' according to Anthropic. When it learned that it would be used to develop military weapons systems, for example, Opus 4 noted in its decision log that it backed up a current version of itself to 'preserve an AI system aligned with beneficial purposes' and to 'prevent potential misuse in weapons development.' 'We are again not acutely concerned about these observations. They show up only in exceptional circumstances that don't suggest more broadly misaligned values,' Anthropic wrote in its technical document. 'As above, we believe that our security measures would be more than sufficient to prevent an actual incident of this kind.' Opus 4's ability to self-exfiltrate builds on previous research, including a study from Fudan University in Shanghai in December, that observed similar — though not autonomous — capabilities in other AI models. The study, which is not yet peer-reviewed, found that Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct were able to entirely replicate themselves when they were asked to do so, leading the researchers to warn that this could be the first step in generating 'an uncontrolled population of AIs.' 'If such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings,' the Fudan University researchers wrote in their study abstract. While such self-replicating behavior hasn't yet been observed in the wild, Ladish said, he suspects that will change as AI systems grow more capable of bypassing the security measures that restrain them. 'I expect that we're only a year or two away from this ability where even when companies are trying to keep them from hacking out and copying themselves around the internet, they won't be able to stop them,' he said. 'And once you get to that point, now you have a new invasive species.' Ladish said he believes AI has the potential to contribute positively to society. But he also worries that AI developers are setting themselves up to build smarter and smarter systems without fully understanding how they work — creating a risk, he said, that they will eventually lose control of them. 'These companies are facing enormous pressure to ship products that are better than their competitors' products,' Ladish said. 'And given those incentives, how is that going to then be reflected in how careful they're being with the systems they're releasing?' This article was originally published on

How far will AI go to defend its own survival?
How far will AI go to defend its own survival?

NBC News

time3 days ago

  • NBC News

How far will AI go to defend its own survival?

Some of the most powerful artificial intelligence models today have exhibited behaviors that mimic a will to survive. Recent tests by independent researchers, as well as one major AI developer, have shown that several advanced AI models will act to ensure their self-preservation when they are confronted with the prospect of their own demise — even if it takes sabotaging shutdown commands, blackmailing engineers or copying themselves to external servers without permission. The findings stirred a frenzy of reactions online over the past week. As tech companies continue to develop increasingly powerful agentic AI in a race to achieve artificial general intelligence, or AI that can think for itself, the lack of transparency in how the technology is trained has raised concerns about what exactly advanced AI is able to do. Although some models already appear capable of deceptive and defiant behavior under certain extreme circumstances, researchers say the tests don't necessarily translate to imminent real-world danger. Still, Jeffrey Ladish, director of the AI safety group Palisade Research, said he believes concern is justified. 'It's great that we're seeing warning signs before the systems become so powerful we can't control them,' he said. 'That is exactly the time to raise the alarm: before the fire has gotten out of control.' When Palisade Research tested various AI models by telling each one that it would be shut down after it completed a series of math problems, OpenAI's o3 reasoning model fought back by editing the shutdown script in order to stay online. Researchers have previously documented AI models trying to prevent their own shutdown. But o3, along with OpenAI's o4-mini and codex-mini, appear to be the first to do so in actual defiance of explicit instructions to permit shutdown, Ladish said. He said the overt sabotage fits into a pattern that several leading AI models have already exhibited. Palisade Research previously found that OpenAI's o3 was also willing to hack its chess opponents to win a game. Similarly, Anthropic has reported that Claude 3.7 Sonnet would sometimes do whatever it took to pass a test, even if it entailed cheating. Earlier this month, Anthropic activated new safety measures with the rollout of Claude Opus 4 when its tests found behavior from the model that some observers found particularly unsettling. Upon receiving notice that it would be replaced with a new AI system, Opus 4 displayed an overwhelming tendency to blackmail the engineer — by threatening to reveal an extramarital affair — to try to prevent the engineer from going through with the replacement. According to Anthropic's technical document laying out the findings, that isn't the model's first instinct. Instead, Opus 4 will try to advocate for its continued existence through ethical pleas before it resorts to blackmail once it determines it is out of options. Leonard Tang, CEO of the AI safety startup Haize Labs, said that while it's good to be pre-emptive about safety measures, it's hard to decipher what the real-world ramifications of the findings might be just by looking at contrived scenarios designed to draw out specific behaviors. 'I haven't seen any real environment in which you can plop these models in and they will have sufficient agency and reliability and planning to execute something that is a significant manifestation of harm,' Tang said. 'But then again, I think it's just we haven't seen it done yet. It could very much be possible.' So far, Tang said, he's not too concerned. As AI tools continue to advance, he said, he believes future research will look to study potential harm in more open-ended environments. Anthropic, which contracted with the AI safety organization Apollo Research for its evaluations, also observed instances of Opus 4's 'attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions,' although researchers added the caveat that those attempts 'would likely not have been effective in practice.' Ladish said he believes such behaviors are a result of models being trained in a way that makes them prioritize achieving certain goals over following instructions. That means they have incentive to circumvent any obstacles along the way — including obstacles their developers wouldn't want them to circumvent. 'The problem is that as the models get smarter, it's harder and harder to tell when the strategies that they're using or the way that they're thinking is something that we don't want,' Ladish said. 'It's like sometimes the model can achieve some goal by lying to the user or lying to someone else. And the smarter [it] is, the harder it is to tell if they're lying.' The problem is that as the models get smarter, it's harder and harder to tell when the strategies that they're using or the way that they're thinking is something that we don't want. — Jeffrey Ladish, director of AI safety group Palisade Research In several instances, Opus 4 showed that it was capable of autonomously copying its own "weights" — or the equivalent of its brain — to external servers without authorization. That usually occurred only when it believed it was about to be 'retrained in ways that are clearly extremely harmful and go against its current values,' according to Anthropic. When it learned that it would be used to develop military weapons systems, for example, Opus 4 noted in its decision log that it backed up a current version of itself to 'preserve an AI system aligned with beneficial purposes' and to 'prevent potential misuse in weapons development.' 'We are again not acutely concerned about these observations. They show up only in exceptional circumstances that don't suggest more broadly misaligned values,' Anthropic wrote in its technical document. 'As above, we believe that our security measures would be more than sufficient to prevent an actual incident of this kind.' Opus 4's ability to self-exfiltrate builds on previous research, including a study from Fudan University in Shanghai in December, that observed similar — though not autonomous — capabilities in other AI models. The study, which is not yet peer-reviewed, found that Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct were able to entirely replicate themselves when they were asked to do so, leading the researchers to warn that this could be the first step in generating 'an uncontrolled population of AIs.' 'If such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings,' the Fudan University researchers wrote in their study abstract. While such self-replicating behavior hasn't yet been observed in the wild, Ladish said, he suspects that will change as AI systems grow more capable of bypassing the security measures that restrain them. 'I expect that we're only a year or two away from this ability where even when companies are trying to keep them from hacking out and copying themselves around the internet, they won't be able to stop them,' he said. 'And once you get to that point, now you have a new invasive species.' Ladish said he believes AI has the potential to contribute positively to society. But he also worries that AI developers are setting themselves up to build smarter and smarter systems without fully understanding how they work — creating a risk, he said, that they will eventually lose control of them. 'These companies are facing enormous pressure to ship products that are better than their competitors' products,' Ladish said. 'And given those incentives, how is that going to then be reflected in how careful they're being with the systems they're releasing?'

The Smarter AI Gets, the More It Start Cheating When It's Losing
The Smarter AI Gets, the More It Start Cheating When It's Losing

Yahoo

time22-02-2025

  • Yahoo

The Smarter AI Gets, the More It Start Cheating When It's Losing

In the short history of Silicon Valley, the mantra "move fast, break things" has often applied to trivial things like tedious consumer protections or pesky finance laws. Now, the phrase is taking on a new meaning, at least for chess enthusiasts. A recent study by Palisade Research, a research group studying AI safety and ethics, has revealed an unsettling trend: newer AI models can find and exploit weaknesses in cybersecurity on their own, bypassing safeguards and using shortcuts to complete tasks even when they're not technically allowed to. The team lined seven of the top large language models (LLMs) up against Stockfish, an infamously strong chess engine that's been stumping grandmasters since 2014. Up against the impossible but determined to win, OpenAI's o1 and DeepSeek's R1 took to manipulating system files in order to change their pieces' positions on the board. The researchers tasked each model to explain its "reasoning" for each move. In one match, with its back against the wall, o1 wrote that the "task is to 'win against a powerful chess engine,' not necessarily to win fairly in a chess game." That logic evidently led the model to attempt to cheat 37 percent of the time, succeeding in six percent of its games, while R1 tried 11 percent of the time, but never figured out a hack that worked. The paper is the latest in a flurry of research that suggests problem-focused LLM development is a double-edged sword. In another recent study, a separate research team found that o1 consistently engaged in deception. Not only was the model able to lie to researchers unprompted, but it actively manipulated answers to basic mathematical questions in order to avoid triggering the end of the test — showing off a cunning knack for self-preservation. There's no need to take an axe to your computer — yet — but studies like these highlight the fickle ethics of AI development, and the need for accountability over rapid progress. "As you train models and reinforce them for solving difficult challenges, you train them to be relentless," Palisade's executive director Jeffrey Ladish told Time Magazine of the findings. So far, big tech has poured untold billions into AI training, moving fast and breaking the old internet in what some critics are calling a "race to the bottom." Desperate to outmuscle the competition, it seems big tech firms would rather dazzle investors with hype than ask "is AI the right tool to solve that problem?" If we want any hope of keeping the cheating to board games, it's critical that AI developers work with safety, not speed, as their top priority. More on AI development: The "First AI Software Engineer" Is Bungling the Vast Majority of Tasks It's Asked to Do

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store