How AI could help prepare for future pandemics, according to Oxford scientists
Artificial intelligence (AI) can be used to help society prepare for future pandemics, researchers from the University of Oxford say.
Scientists from across the globe, spanning Africa, America, Asia, Australia, and Europe, have outlined how AI can change the landscape of infectious disease research and improve preparedness for pandemics.
The study highlights the importance of safety, accountability, and ethics in the use of AI for infectious disease research.
The researchers are calling for a collaborative and transparent approach to datasets and AI models.
Lead author Professor Moritz Kraemer, from the University of Oxford's Pandemic Sciences Institute, said: "In the next five years, AI has the potential to transform pandemic preparedness.
"It will help us better anticipate where outbreaks will start and predict their trajectory, using terabytes of routinely collected climatic and socio-economic data.
"It might also help predict the impact of disease outbreaks on individual patients by studying the interactions between the immune system and emerging pathogens.
"Taken together and if integrated into countries' pandemic response systems, these advances will have the potential to save lives and ensure the world is better prepared for future pandemic threats."
The research identifies several opportunities for AI in pandemic preparedness, including improving current models of disease spread, pinpointing areas with high transmission potential, and enhancing the use of genetic data in disease surveillance.
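The study doesn't name specific models; as a concrete reference point, the sketch below implements a classic compartmental SIR model, one of the standard disease-spread models that AI-driven calibration could improve. All parameter values and population figures here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, population=1_000_000,
                 initial_infected=10, days=180):
    """Discrete-time SIR model: susceptible, infected, recovered."""
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # transmission
        new_recoveries = gamma * i                  # recovery
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return np.array(history)

# Peak timing is one quantity better-calibrated models aim to predict.
trajectory = simulate_sir()
print(f"Infections peak on day {trajectory[:, 1].argmax()}")
```

In practice, machine-learning methods might estimate the transmission and recovery rates from the climatic and socio-economic data streams mentioned above, rather than fixing them by hand.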
AI could also help determine the properties of new pathogens, predict their traits, assess whether cross-species jumps are likely, and anticipate which new variants of already-circulating pathogens might arise.
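As a toy illustration of predicting pathogen traits from genetic data (not a method described in the study), the sketch below trains a classifier on k-mer counts, a common way to turn sequences into numeric features. The sequences, labels, and feature choice are all invented for demonstration and do not reflect any real pathogen.

```python
from itertools import product
from sklearn.linear_model import LogisticRegression

def kmer_counts(sequence, k=3):
    """Represent a DNA sequence as overlapping counts of every k-mer."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    windows = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    return [windows.count(km) for km in kmers]

# Toy data: label 1 marks (fictional) pathogens that crossed species.
sequences = ["ACGTACGTGGCA", "TTGACCAGTGCA", "ACGTGGTGACGT", "TTTTGACCAGCA"]
labels = [1, 0, 1, 0]

model = LogisticRegression(max_iter=1000)
model.fit([kmer_counts(s) for s in sequences], labels)
print(model.predict([kmer_counts("ACGTGGCAACGT")]))  # 1 = likely jump
```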
However, the scientists caution against relying solely on AI to solve infectious disease challenges.
They suggest that integrating human feedback into AI modelling workflows could help overcome existing limitations.
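The study doesn't prescribe a mechanism for this. One minimal sketch, assuming a weighted ensemble of forecasting models (our assumption for illustration, not the authors' method), lets expert reviewers down-weight models whose outputs they flag as implausible:

```python
def reweight(weights, flags, penalty=0.5):
    """Scale down expert-flagged models by `penalty`, then renormalise."""
    adjusted = [w * (penalty if flagged else 1.0)
                for w, flagged in zip(weights, flags)]
    total = sum(adjusted)
    return [w / total for w in adjusted]

weights = [0.25, 0.25, 0.25, 0.25]   # four forecasting models, equal weight
flags = [False, True, False, False]  # an expert flags the second model
print(reweight(weights, flags))      # its influence shrinks; others grow
```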
The authors also express concerns about the quality and representativeness of training data, the limited accessibility of AI models to the wider community, and potential risks associated with the deployment of black-box models for decision making.
Study author Professor Eric Topol, founder and director of the Scripps Research Translational Institute, said: "While AI has remarkable transformative potential for pandemic mitigation, it is dependent upon extensive worldwide collaboration and comprehensive, continuous surveillance data inputs."
Study lead author Samir Bhatt, from the University of Copenhagen and Imperial College London, added: "Infectious disease outbreaks remain a constant threat, but AI offers policymakers a powerful new set of tools to guide informed decisions on when and how to intervene."
The authors call for rigorous benchmarks to evaluate AI models and for strong collaboration between government, society, industry, and academia to support the sustainable and practical development of models that improve human health.

Related Articles
This AI Company Wants Washington To Keep Its Competitors Off the Market
Dario Amodei, CEO of the artificial intelligence company Anthropic, published a guest essay in The New York Times Thursday arguing against a proposed 10-year moratorium on state AI regulation. Amodei argues that a patchwork of regulations would be better than no regulation whatsoever. Skepticism is warranted whenever the head of an incumbent firm calls for more regulation, and this case is no different. If Amodei gets his way, Anthropic would face less competition, to the detriment of AI innovation, AI security, and the consumer.

Amodei's op-ed came in response to a provision of the so-called One Big Beautiful Bill Act that would prevent states, cities, and counties from enforcing any regulation that specifically targets AI models, AI systems, or automated decision systems for 10 years. Senate Republicans have amended the clause from a simple requirement to a condition for receiving federal broadband funds in order to comply with the Byrd Rule, which in Politico's words "blocks anything but budgetary issues from inclusion in reconciliation."

Amodei begins by describing how, in a recent stress test conducted at his company, a chatbot threatened to forward evidence of an experimenter's adultery to his wife unless he withdrew plans to shut the AI down. The CEO also raises more tangible concerns, such as reports that a version of Google's Gemini model is "approaching a point where it could help people carry out cyberattacks."

Matthew Mittelsteadt, a technology fellow at the Cato Institute, tells Reason that the stress test was "very contrived" and that "there are no AI systems where you must prompt it to turn it off." You can just turn it off. He also acknowledges that, while there is "a real cybersecurity danger [of] AI being used to spot and exploit cyber-vulnerabilities, it can also be used to spot and patch" them.

Outside of cyberspace and in, well, actual space, Amodei sounds the alarm that AI could acquire the ability "to produce biological and other weapons." But there's nothing new about that: knowledge and reasoning, organic or artificial, and ultimately wielded by people in either case, can be used to cause problems as well as to solve them. An AI that can model three-dimensional protein structures to create cures for previously untreatable diseases can also create virulent, lethal pathogens.

Amodei recognizes the double-edged nature of AI and says voluntary model evaluation and publication are insufficient to ensure that benefits outweigh costs. Instead of a 10-year moratorium, Amodei calls on the White House and Congress to work together on a transparency standard for AI companies. In lieu of federal testing standards, Amodei says state laws should pick up the slack without being "overly prescriptive or burdensome." But that caveat is exactly the kind of wishful thinking Amodei indicts proponents of the moratorium for: not only would 50 state transparency laws be burdensome, says Mittelsteadt, but they could "actually make models less legible."

Neil Chilson of the Abundance Institute also inveighed against Amodei's call for state-level regulation, which is much more onerous than Amodei suggests. "The leading state proposals…include audit requirements, algorithmic assessments, consumer disclosures, and some even have criminal penalties," Chilson tweeted, so "the real debate isn't 'transparency vs. nothing,' but 'transparency-only federal floor vs. intrusive state regimes with audits, liability, and even criminal sanctions.'"

Mittelsteadt thinks national transparency regulation is "absolutely the way to go." But how the U.S. chooses to regulate AI might not have much bearing on Skynet-doomsday scenarios because, while America leads the way in AI, it's not the only player in the game. "If bad actors abroad create Amodei's theoretical 'kill everyone bot,' no [American] law will matter," says Mittelsteadt. But such a law can "stand in the way of good actors using these tools for defense."

Amodei is not the only CEO of a leading AI company to call for regulation. In 2023, Sam Altman, co-founder and then-CEO of OpenAI, called on lawmakers to consider "intergovernmental oversight mechanisms and standard-setting" for AI. In both cases, and in any others that come along, the public should beware of calls for AI regulation that would foreclose market entry, protect incumbent firms' profits from being bid away by competitors, and reduce the incentive to maintain market share the benign way: through innovation and product differentiation.


Committee explores nuclear solutions to AI demand
House Science, Space and Technology Committee lawmakers will meet this week to discuss how nuclear energy could help meet a projected surge in demand from artificial intelligence operations. The Energy Subcommittee hearing, to be led by Chair Randy Weber (R-Texas), continues Republicans' early focus on, and significant concern about, energy supply and demand in the 119th Congress.

Republicans believe baseload energy sources, such as nuclear and fossil fuels, need to be built at a rapid pace to offset a surge in intermittent, renewable energy generation that could put grid reliability at risk. Indeed, transmission providers are forecasting 8.2 percent growth in electricity load over the next five years, primarily due to AI data center proliferation. That's equivalent to hooking up nearly 50 million homes to the grid by 2029.

But whether nuclear energy can actually meet that demand remains a point of debate among energy and policy experts.

Anthropic's AI-generated blog dies an early death
Claude's blog is no more. A week after TechCrunch profiled Anthropic's experiment to task the company's Claude AI models with writing blog posts, Anthropic wound down the blog and redirected the address to its homepage. Sometime over the weekend, the Claude Explains blog disappeared, along with its initial few posts.

A source familiar with the matter tells TechCrunch the blog was a "pilot" meant to help Anthropic's team combine customer requests for explainer-type "tips and tricks" content with marketing goals. Claude Explains, which had a dedicated page on Anthropic's website and was edited for accuracy by humans, was populated by posts on technical topics related to various Claude use cases (e.g. "Simplify complex codebases with Claude").

The blog, which was intended to be a showcase of sorts for Claude's writing abilities, wasn't clear about how much of Claude's raw writing was making its way into each post. An Anthropic spokesperson previously told TechCrunch that the blog was overseen by "subject matter experts and editorial teams" who "enhance[d]" Claude's drafts with "insights, practical examples, and […] contextual knowledge." The spokesperson also said Claude Explains would expand to topics ranging from creative writing to data analysis to business strategy. Apparently, those plans changed in pretty short order.

"[Claude Explains is a] demonstration of how human expertise and AI capabilities can work together," the spokesperson told TechCrunch earlier this month. "[The blog] is an early example of how teams can use AI to augment their work and provide greater value to their users. Rather than replacing human expertise, we're showing how AI can amplify what subject matter experts can accomplish."

Claude Explains didn't get the rosiest reception on social media, in part due to the lack of transparency about which copy was AI-generated. Some users pointed out it looked a lot like an attempt to automate content marketing, an ad tactic that relies on generating content on popular topics to serve as a funnel for potential customers. More than 24 websites were linking to Claude Explains posts before Anthropic wound down the pilot, according to search engine optimization tool Ahrefs. That's not bad for a blog that was only live for around a month.

Anthropic might've also grown wary of implying Claude performs better at writing tasks than is actually the case. Even the best AI today is prone to confidently making things up, which has led to embarrassing gaffes for publishers that have publicly embraced the tech. Bloomberg, for example, has had to correct dozens of AI-generated summaries of its articles, and G/O Media's error-riddled AI-written features, published against editors' wishes, attracted widespread ridicule.