DeepMind's AlphaGenome Uses AI to Decipher Noncoding DNA for Research, Personalized Medicine
DeepMind's newly released technology could transform how we treat genetic diseases. Though scientists long dismissed noncoding DNA as 'junk,' we now know this so-called dark matter controls when and how genes turn on or off. AlphaGenome shows promise in predicting how mutations in these regions cause diseases—from certain cancers to rare disorders where crucial proteins never get made. By revealing these hidden control switches, AlphaGenome could help researchers design therapies that target genetic conditions, potentially aiding millions of people.
But to understand the complexity of the task for which AlphaGenome was created, one must consider how the definition of a 'gene' has evolved. The term, coined in 1909 to describe the invisible units of heredity that Gregor Mendel had proposed in 1865, initially carried no molecular baggage. But by the 1940s, the 'one gene, one enzyme' idea took hold. And by the 1960s, textbooks taught that for a stretch of DNA to be properly called a gene, it had to code for a specific protein.
Over the past two decades, the definition has broadened with the discovery of genes that code for the many types of RNA that never get translated into proteins. Today a gene is considered to be any DNA segment whose RNA or protein product performs a biological function. This conceptual shift redraws the genome's real estate map: only about 1 to 2 percent of human DNA directly codes for proteins, but under the broader definition, roughly 40 percent counts as gene territory.
What remains unaccounted for is significant: more than a billion units of code that can determine how, and how often, genes get activated. Because the relevant clues lie far apart and play out through complex cycles of gene regulation, decoding them has been among biology's hardest challenges. AlphaGenome's goal is to understand how these regions affect gene expression, and how even tiny changes can tilt the body's balance between health and disease. To do so, the AI system takes a DNA sequence of up to one million letters as input and 'predicts thousands of molecular properties characterising its regulatory activity,' according to a statement issued by DeepMind.
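To make that input/output contract concrete, here is a minimal sketch of how fixed-window genomic sequence models typically ingest DNA, assuming a standard one-hot encoding; the encoder and names below are illustrative, not DeepMind's published code:

```python
# Illustrative only: sequence models of this kind commonly receive a
# fixed-length DNA window encoded as a (length, 4) array, one channel per
# base. The encoding scheme here is an assumption, not DeepMind's code.
import numpy as np

WINDOW = 1_000_000  # AlphaGenome's stated maximum input length, in letters
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot_encode(seq: str, window: int) -> np.ndarray:
    """Pad or trim `seq` to `window` letters and one-hot encode it.
    Unknown letters (e.g. 'N') become all-zero rows."""
    seq = seq.upper()[:window].ljust(window, "N")
    encoded = np.zeros((window, 4), dtype=np.float32)
    for i, base in enumerate(seq):
        if base in BASES:
            encoded[i, BASES[base]] = 1.0
    return encoded

# Toy demo with a small window; the real model would use WINDOW.
x = one_hot_encode("ACGTN" * 10, window=100)
print(x.shape)  # (100, 4); the model maps such arrays to thousands of
                # per-position regulatory readouts (expression, splicing...)
```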
Already, AlphaGenome has replicated results from genetics labs. In a June 2025 preprint study (which has yet to be peer-reviewed), AlphaGenome's team described using the model to run a simulation that mirrored known DNA interactions: mutations that act like rogue light switches, cranking a gene into overdrive in a certain type of leukemia. When AlphaGenome simulated interactions on a stretch of DNA containing both the gene and the mutation, it predicted the same complex chain of events already observed in lab experiments.
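The underlying recipe, often called in silico mutagenesis, can be sketched in a few lines. This is a hedged illustration under assumptions, not the preprint's code: `predict_tracks` is a hypothetical stand-in for a trained model such as AlphaGenome, and the scoring step simply compares predictions on the reference and mutated sequences:

```python
# A minimal sketch of variant-effect scoring: predict regulatory signals
# for the reference sequence and for the same sequence carrying the
# mutation, then take the difference. `predict_tracks` is hypothetical.
import zlib
import numpy as np

def predict_tracks(seq: str) -> np.ndarray:
    """Placeholder model: maps a sequence to per-position signal tracks."""
    rng = np.random.default_rng(zlib.crc32(seq.encode()))
    return rng.random((len(seq), 8))  # toy output: 8 fake regulatory tracks

def variant_effect(reference: str, position: int, alt_base: str) -> np.ndarray:
    """Score a single-letter mutation as the shift in predicted signal."""
    mutated = reference[:position] + alt_base + reference[position + 1:]
    return predict_tracks(mutated) - predict_tracks(reference)

delta = variant_effect("ACGT" * 250, position=500, alt_base="T")
print(delta.shape)  # per-position, per-track changes across the window
```

In output like this, a 'rogue switch' mutation of the kind described above would appear as a large predicted gain in expression-related tracks near the affected gene.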
Though AlphaGenome is currently available only for noncommercial testing, responses in the scientific community have been enthusiastic so far, with both biotech start-ups and university researchers publicly expressing excitement about the system's potential to accelerate research.
Limits remain. AlphaGenome struggles to capture interactions between DNA elements that lie more than 100,000 letters apart, can miss some tissue-specific nuances and is not designed to predict traits from a complete personal genome. Complex diseases that depend on development or environment also lie outside its direct scope. Still, the system suggests wide-ranging uses: by tracing how minute changes ripple through gene regulation, it could pinpoint the roots of genetic disorders. It could help in the design of synthetic DNA. And above all, it could offer a faster way to chart the genome's complex regulatory circuitry.
