
Argonne Leverages AI and Supercomputing to Revolutionize Cancer Research
At the U.S. Department of Energy's (DOE) Argonne National Laboratory, researchers are using the power of artificial intelligence (AI) and high-performance computing (HPC) to pioneer new methods that speed up drug discovery and enhance drug response prediction. Over the past decade, their efforts have evolved from creating AI tools for cancer research to evaluating the growing number of AI models, ultimately leading to their latest work aimed at drug-resistant cancer targets.
Using the new Aurora exascale system, researchers led an early science project that focused on AI-driven drug discovery for cancer. Their work demonstrated how the system's immense processing power can help accelerate the discovery of promising new drug molecules.
A decade of AI-driven innovation
The origins of Argonne's AI-driven cancer research date back to 2016, when DOE forged a partnership with the National Cancer Institute (NCI) to employ advanced computing technologies in the fight against cancer. Argonne has been a key player in this collaboration, developing software and AI tools to accelerate progress in cancer research.
A cornerstone of this effort was the CANcer Distributed Learning Environment (CANDLE) project. Led by Argonne's Rick Stevens and supported by DOE's Exascale Computing Project, CANDLE's goal was to develop a scalable deep learning software stack for the nation's exascale supercomputers.
With that foundation, the lab's focus extended from building AI models to developing a rigorous method to assess the growing number of models emerging from the broader cancer research community.
This shift led to the launch of the IMPROVE (Innovative Methodologies and New Data for Predictive Oncology Model Evaluation) project in 2021. Led by Argonne in collaboration with the Frederick National Laboratory for Cancer Research, the IMPROVE team set out to develop a standardized way to analyze and compare the performance of various drug response prediction models.
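To illustrate the kind of standardized comparison IMPROVE aims for, the sketch below scores two generic regression models on identical cross-validation folds with a shared metric, so their results are directly comparable. It uses synthetic data and off-the-shelf scikit-learn models purely as stand-ins; it is not the IMPROVE framework itself.

```python
# Minimal sketch of standardized model comparison (not the IMPROVE codebase):
# every model is evaluated on the same folds with the same metric.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))              # stand-in for tumor/drug features
y = X[:, 0] * 2.0 + rng.normal(size=200)    # stand-in for measured drug response

models = {"ridge": Ridge(), "random_forest": RandomForestRegressor(n_estimators=50)}
splits = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X))

for name, model in models.items():
    scores = []
    for train_idx, test_idx in splits:      # identical folds for every model
        model.fit(X[train_idx], y[train_idx])
        scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))
    print(f"{name}: mean R^2 = {np.mean(scores):.3f}")
```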
While the team's efforts are laying the groundwork for more reliable and effective AI models, the project continues to evolve to meet new challenges as they emerge.
The next frontier: 'undruggable' targets
Expanding on the DOE-NCI efforts, researchers are now setting their sights on a longstanding challenge in cancer research: 'undruggable' targets (proteins that have so far resisted inhibition by small-molecule drugs).
With a focus on proteins that play a key role in cancer progression, the team's work begins with a list of targets identified through lab experiments. The researchers then retrieve the protein sequences from public databases. If a protein's 3D structure is unknown, they work with scientists at the Advanced Photon Source (APS), a DOE Office of Science user facility, to determine it. The APS was recently upgraded to deliver significantly brighter X-ray beams, giving scientists a powerful tool to advance research across different fields.
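As an illustration of the sequence-retrieval step, the sketch below pulls a FASTA record from UniProt, one widely used public protein database. The article does not name specific targets or databases; the KRAS accession P01116 is used here purely as a hypothetical example.

```python
# Minimal sketch of fetching a protein sequence from a public database.
# KRAS (UniProt accession P01116) is an illustrative choice, not a target
# named in the article.
import requests

def fetch_uniprot_fasta(accession: str) -> str:
    """Return the FASTA record for a UniProt accession."""
    url = f"https://rest.uniprot.org/uniprotkb/{accession}.fasta"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text

fasta = fetch_uniprot_fasta("P01116")          # example accession
sequence = "".join(fasta.splitlines()[1:])     # drop the FASTA header line
print(f"Sequence length: {len(sequence)} residues")
```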
After determining the protein's structure, the team turns to Aurora to simulate the behavior and interactions of the protein at the atomic level. The simulations combined with experimental data help identify areas where small molecules might bind to inhibit the protein's activity.
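The sketch below shows what a minimal atomistic simulation of a solvated protein can look like, using the open-source OpenMM toolkit as a stand-in; the article does not say which simulation codes the team runs on Aurora, and "protein.pdb" is a placeholder input structure.

```python
# Minimal sketch of an atomistic protein simulation (OpenMM used as a
# stand-in; not necessarily the team's workflow). "protein.pdb" is a
# placeholder structure file.
from openmm import app, unit, LangevinMiddleIntegrator

pdb = app.PDBFile("protein.pdb")
forcefield = app.ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
modeller = app.Modeller(pdb.topology, pdb.positions)
modeller.addHydrogens(forcefield)                       # crystal structures usually lack hydrogens
modeller.addSolvent(forcefield, padding=1.0 * unit.nanometer)

system = forcefield.createSystem(modeller.topology,
                                 nonbondedMethod=app.PME,
                                 constraints=app.HBonds)
integrator = LangevinMiddleIntegrator(300 * unit.kelvin,
                                      1.0 / unit.picosecond,
                                      2.0 * unit.femtoseconds)
simulation = app.Simulation(modeller.topology, system, integrator)
simulation.context.setPositions(modeller.positions)
simulation.minimizeEnergy()
simulation.reporters.append(app.StateDataReporter("log.csv", 1000, step=True,
                                                  potentialEnergy=True,
                                                  temperature=True))
simulation.step(10_000)   # short equilibration; production runs would be far longer
```

In practice, trajectories like this are analyzed alongside experimental data to flag surface pockets where a small molecule might bind.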
The computational results are then relayed to experimental collaborators to validate the findings. Adding to the challenge is the team's focus on undruggable targets. Inhibiting these proteins has eluded researchers for decades, earning them a reputation as one of the most difficult problems in cancer biology.