
OpenAI warns models with higher bioweapons risk are imminent
OpenAI cautioned Wednesday that its upcoming models will cross into a higher level of risk when it comes to the creation of biological weapons, especially in the hands of people who don't really understand what they're doing.
Why it matters: The company, and society at large, need to be prepared for a future in which amateurs can more readily graduate from simple garage weapons to sophisticated biological agents.
Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company's preparedness framework.
As a result, the company said in a blog post, it is stepping up its testing of such models and adding fresh precautions designed to keep them from aiding in the creation of biological weapons.
OpenAI didn't put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios, "We are expecting some of the successors of our o3 (reasoning model) to hit that level."
Reality check: OpenAI isn't necessarily saying that its platform will be capable of creating new types of bioweapons.
Rather, it believes that — without mitigations — models will soon be capable of what it calls "novice uplift," or allowing those without a background in biology to do potentially dangerous things.
"We're not yet in the world where there's like novel, completely unknown creation of bio threats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."
Between the lines: One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm.
But Heidecke acknowledged that OpenAI and others need systems that are highly accurate at detecting and preventing harmful use.
"This is not something where like 99% or even one in 100,000 performance is like is sufficient," he said.
"We basically need, like, near perfection," he added, noting that human monitoring and enforcement systems need to be able to quickly identify any harmful uses that escape automated detection and then take the action necessary to "prevent the harm from materializing."
The big picture: OpenAI is not the only company warning of models reaching new levels of potentially harmful capability.
When it released Claude 4 last month, Anthropic said it was activating fresh precautions due to the potential risk of that model aiding in the spread of biological and nuclear threats.
Various companies have also been warning that it's time to start preparing for a world in which AI models can match or exceed human performance in a wide range of tasks.
What's next: OpenAI said it will convene an event next month to bring together certain nonprofits and government researchers to discuss the opportunities and risks ahead.
OpenAI is also looking to expand its work with the U.S. national labs, and the government more broadly, OpenAI policy chief Chris Lehane told Axios.
"We're going to explore some additional type of work that we can do in terms of how we potentially use the technology itself to be really effective at being able to combat others who may be trying to misuse it," Lehane said.
Lehane added that the increased capability of the most powerful models highlights "the importance, at least in my view, for the AI build out around the world, for the pipes to be really US-led."

Related Articles
Sam Altman says he was largely right in predicting how quickly AI would develop — but not how people would react
Sam Altman says OpenAI was largely right about its "technical predictions" regarding AI development. However, Altman expected society to "feel more different" from ChatGPT's impact by this point, he said on his brother's podcast. Going forward, he added, more people should be thinking about how to apply AI to benefit society as a whole.
Sam Altman says he correctly predicted how AI would develop. There is, however, one prediction he didn't quite get right.
"I feel like we've been very right on the technical predictions, and then I somehow thought society would feel more different if we actually delivered on them than it does so far," Altman, the CEO of OpenAI, said on a recent episode of "Uncapped with Jack Altman." "But I don't even — it's not even obvious that that's a bad thing."
Altman believes that OpenAI has "cracked" reasoning in its models, and said its o3 LLM in particular is on par with a human with a Ph.D. in many subject areas. Though the technology's trajectory has proceeded largely as expected, Altman said people aren't reacting as strongly as he anticipated.
"The models can now do the kind of reasoning in a particular domain you'd expect a Ph.D. in that field to be able to do," he said. "In some sense we're like, 'Oh okay, the AIs are like a top competitive programmer in the world now,' or 'AIs can get like a top score on the world's hardest math competitions,' or 'AIs can like, you know, do problems that I'd expect an expert Ph.D. in my field to do,' and we're like not that impressed. It's crazy."
While AI use is on the rise, society hasn't yet transformed in leaps and bounds. AI is already affecting businesses, with many companies adopting AI tools and, in some cases, using them to augment or replace human labor. Still, Altman believes the response to the technology has been relatively underwhelming compared with what he sees as its future potential.
"If I told you in 2020, 'We're going to make something like ChatGPT and it's going to be as smart as a Ph.D. student in most areas, and we're going to deploy it, and a significant fraction of the world is going to use it and kind of use it a lot,'" he said. "Maybe you would have believed that, maybe you wouldn't have."
"But conditioned on that, I bet you would say 'Okay, if that happens, the world looks way more different than it does right now,'" he added.
Altman acknowledges that AI is currently most useful as a sort of "co-pilot," but he foresees major change if it's ever able to act autonomously, especially given its potential applications in science.
"You already hear scientists who say they're faster with AI," he said. "Like, we don't have AI maybe autonomously doing science, but if a human scientist is three times as productive using o3, that's still a pretty big deal. And then, as that keeps going and the AI can like autonomously do some science, figure out novel physics ... "
On risk, Altman said he isn't too concerned, despite other AI leaders, such as Anthropic's Dario Amodei and DeepMind's Demis Hassabis, saying they worry about potential catastrophic scenarios in the future.
"I don't know about way riskier. I think like, the ability to make a bioweapon or like, take down a country's whole grid — you can do quite damaging things without physical stuff," he said. "It gets riskier in like sillier ways. Like, I would be afraid to have a humanoid robot walking around my house that might fall on my baby, unless I like really, really trusted it."
OpenAI did not immediately respond to a request for comment from Business Insider.
For now, Altman said, life remains relatively constant. But if things begin to snowball, and he believes they will, he has no concrete idea what the world may end up looking like.
"I think we will get to extremely smart and capable models — capable of discovering important new ideas, capable of automating huge amounts of work," he said. "But then I feel totally confused about what society looks like if that happens. So I'm like most interested in the capabilities questions, but I feel like maybe at this point more people should be talking about, like, how do we make sure society gets the value out of this?"
Poor grid planning could shift Europe's data centre geography, report says
By Forrest Crellin
PARIS (Reuters) - Europe's leading data centre hubs face a major shift as developers will go wherever connection times are shortest, unless there is more proactive electricity grid planning, a report published Thursday by energy think-tank Ember showed.
Data centre buildout has exploded in recent years as tech companies race to assemble the strongest offering of competitive artificial intelligence (AI) models, which rely on a new generation of power-hungry data centres.
This could lead to a geographical shift in investment in Europe as developers look for new locations with easier power access and shorter lead times, the report said. By 2035, half of Europe's data centre capacity could be located outside the current main hubs of Frankfurt, London, Amsterdam, Paris and Dublin.
That shift could drain billions of euros of investment from the congested countries and slow job growth, the report said. Data centres in Germany alone contributed 10.4 billion euros ($12 billion) to GDP in 2024, a figure expected to more than double by 2029. Only France is expected to maintain continued data centre investment, as its grid remains relatively unconstrained.
Connecting a new data centre to the grid in the legacy hubs can take an average of 7–10 years, with some projects facing delays of up to 13 years, the report said. Wait times in newer markets are much shorter; in Italy, connection takes just three years.
"Grids are ultimately deciding where investments go ... they are now effectively a tool to attract investment," said Elisabeth Cremona, senior energy analyst at Ember.
"In Europe's push for competitiveness and economic growth, it now needs to be taking into account grids and driving investment to that infrastructure if it wants to see other projects materialise," she said.
She added that this dynamic is not unique to data centres: any industry that is new or looking to electrify will go through the same process.
In Sweden, Norway and Denmark, data centre electricity demand is expected to triple as early as 2030. In Austria, Greece, Finland, Hungary, Italy, Portugal and Slovakia, data centre consumption is projected to increase by three to five times by 2035 compared with 2024.
($1 = 0.8692 euros)
Amazon CEO Andy Jassy just got brutally honest about AI — and other bosses may follow his lead
Amazon CEO Andy Jassy told employees that AI will reduce head count as the company gets "scrappier." Management commentators say his memo might open the floodgates for more AI job-loss talk. Vague AI messaging risks losing top talent, one workplace commentator told BI.
Amazon's CEO just said the quiet part out loud: AI is coming for plenty of jobs, and other bosses may soon follow his lead.
On Tuesday, Andy Jassy said in a memo that employees should figure out "how to get more done with scrappier teams" and that the move toward AI would eventually "reduce our total corporate workforce." Amazon, with about 1.5 million workers, is the second-largest private employer in the US.
Workplace commentators told Business Insider that Jassy's candor may prompt other leaders to feel comfortable telling their employees who, or what, will replace them.
Marlo Lyons, an author and certified executive coach, said Amazon's directness might encourage other companies to follow suit. "I think if you have a big company that's talking about AI, then it does make it easier for smaller companies to talk about AI — this is basically culture modeling," she told BI.
"In some ways, it might scare you, but at the same time, it should make you say, 'OK, at least my company's being honest to me about it,'" Lyons said.
Other CEOs have also become increasingly transparent about AI expectations, although few have explicitly said AI would reduce their existing workforces. Shopify CEO Tobi Lütke said in a memo in April that "AI usage is now a baseline expectation" and that before managers make a hire, they must first prove that AI couldn't do the job better. Klarna CEO Sebastian Siemiatkowski said in December last year that the fintech had stopped hiring because "AI can already do all of the jobs that we as humans do." Meanwhile, OpenAI CEO Sam Altman said earlier this month that AI agents were already beginning to act like junior-level coworkers and may soon be able to deliver business solutions.
"It'll send shivers down the backs of employees," Cary Cooper, professor of organizational psychology and health at Manchester Business School in the UK, said of the Jassy memo. "I think it'll open it up for HR to now have discussions with senior management about how we deal with this — the introduction of AI in our business."
Cooper warned that companies should be specific with staff about which jobs might be affected and what retraining opportunities are available, or risk "regrettable turnover": losing the talent they most want to keep.
Thomas Roulet, professor of organisational sociology and leadership at the University of Cambridge, told BI that linking layoffs with AI was not new, even if Jassy's openness felt like a turning point. "Firms do not hesitate to use AI as a reason to downsize, whether it is an excuse or an opportunity," he said. "Very often, they downsize before even thinking what they will replace with AI, due to market pressures."
"AI is a great scapegoat for a lot of unpopular strategic choices at the moment," Roulet added.
"There is enormous pressure on companies to show that they are able to replace employees with AI tools," Peter Cappelli, a professor of management at The Wharton School, told BI. "But the evidence indicates that it is very difficult to do so."
Klarna, for example, made headlines in 2022 when the company laid off 700 employees, mostly customer service agents, in favor of AI. In May, the financial services company had to hire some back to improve its services.
In Roulet's view, many companies that have already cut jobs in favor of AI were moving too fast.
"Unfortunately, many firms think of workforce reduction and engage with such reductions before they even think about AI replacement," said Roulet. "The reality is that bringing in AI into work takes a lot of learning cycles and trial and error — it does not appear clearly overnight."