AlphaGenome: New Google AI reads DNA mutations, predicts molecular consequences

Yahoo · 2 days ago

In a big leap for genomics, Google on Wednesday unveiled a powerful AI model that predicts how single DNA mutations affect the complex machinery regulating gene activity.
Named AlphaGenome, the tool covers both coding and non-coding regions of the genome, offering a unified view of variant effects like never before.
It brings base-resolution insight to long-range genomic analysis, decoding the impact of mutations with speed, scale, and unprecedented depth.
The model processes up to 1 million base pairs in a single pass and predicts thousands of molecular properties, including gene expression, splicing patterns, protein-binding sites, and chromatin accessibility across diverse cell types.
It's the first time such a wide range of regulatory features can be modeled jointly using one AI system.
AlphaGenome's architecture first uses convolutional layers to spot short patterns in the DNA sequence, then applies transformers to share information across the entire stretch of genetic code. A final set of layers converts these learned patterns into predictions across various genomic features.
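For readers curious what that pipeline looks like in code, here is a minimal sketch, in PyTorch, of the conv-then-transformer-then-heads pattern described above. It is an illustration only, not DeepMind's implementation; the layer sizes, kernel widths, and the toy 1,024-base input are all assumptions.

```python
# Illustrative conv -> transformer -> output-heads model, not DeepMind's code.
import torch
import torch.nn as nn

class ToyGenomeModel(nn.Module):
    def __init__(self, channels=128, n_heads=4, n_layers=2, n_tracks=16):
        super().__init__()
        # Convolutions detect short local motifs in one-hot encoded DNA.
        self.conv = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Transformer layers let distant positions exchange information.
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # A final head maps each position to per-base output tracks
        # (stand-ins for properties like expression or accessibility).
        self.head = nn.Linear(channels, n_tracks)

    def forward(self, x):            # x: (batch, 4, seq_len), one-hot DNA
        h = self.conv(x)             # (batch, channels, seq_len)
        h = h.transpose(1, 2)        # (batch, seq_len, channels)
        h = self.transformer(h)
        return self.head(h)          # (batch, seq_len, n_tracks)

x = torch.randn(1, 4, 1024)          # toy stand-in for a one-hot sequence
print(ToyGenomeModel()(x).shape)     # torch.Size([1, 1024, 16])
```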
During training, all computations for a single sequence are distributed across multiple interconnected Tensor Processing Units (TPUs), enabling efficient large-scale processing.
A single model was trained in just four hours, using half the compute budget required for its predecessor, Enformer.
Built as a successor to Enformer and complementing AlphaMissense, AlphaGenome is the only model that can jointly predict all evaluated molecular modalities, outperforming or matching specialized models in 24 of 26 benchmark tests.
It was trained on massive public datasets including ENCODE, GTEx, 4D Nucleome, and FANTOM5.
Unlike earlier models that traded sequence length for resolution, AlphaGenome handles both with precision. It captures long-range genomic context and offers base-level predictions, unlocking insights across disease biology, rare variant research, synthetic DNA design, and more.
One standout feature of the new model is its variant scoring system, which efficiently contrasts mutated and unmutated DNA to assess impact across modalities.
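In spirit, that scoring amounts to running the model twice, once on the reference sequence and once on the mutated one, and measuring how far the predictions move. Below is a minimal sketch reusing the toy model above; the helper names and the absolute-difference score are illustrative assumptions, not AlphaGenome's actual scoring rules.

```python
# Toy ref-vs-alt variant scoring; the scoring rule here is an assumption.
import torch

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> torch.Tensor:
    """One-hot encode a DNA string to shape (1, 4, len(seq))."""
    x = torch.zeros(1, 4, len(seq))
    for i, base in enumerate(seq):
        x[0, BASES[base], i] = 1.0
    return x

def variant_effect(model, seq: str, pos: int, alt: str) -> torch.Tensor:
    """Score a single-base variant as the shift in predicted tracks."""
    with torch.no_grad():
        ref_pred = model(one_hot(seq))                              # reference
        alt_pred = model(one_hot(seq[:pos] + alt + seq[pos + 1:]))  # mutated
    # Sum absolute changes along the sequence: one score per output track.
    return (alt_pred - ref_pred).abs().sum(dim=1).squeeze(0)

# Usage with the toy model sketched earlier (scores are meaningless here).
scores = variant_effect(ToyGenomeModel(), "ACGT" * 256, pos=100, alt="T")
print(scores.shape)  # torch.Size([16])
```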
It also features splice-junction modeling, a first-of-its-kind approach to predicting RNA splicing disruptions tied to diseases like cystic fibrosis and spinal muscular atrophy.
In synthetic biology, AlphaGenome could help design regulatory sequences that activate genes selectively, for instance, in nerve cells but not muscle cells.
The model could also prove useful in studying rare variants with large biological effects, such as those responsible for Mendelian disorders.
In a test case, AlphaGenome accurately predicted how a leukemia-linked mutation introduces a MYB DNA binding motif that activates the TAL1 gene, mirroring known mechanisms in T-cell acute lymphoblastic leukemia and showcasing its power to connect non-coding variants to disease genes.
While AlphaGenome marks a major advance, it's not designed or validated for personal genome interpretation or clinical use. It also faces challenges in modeling very distant regulatory interactions — especially those over 100,000 DNA letters away — and in fully capturing cell- and tissue-specific patterns.
Still, researchers say it lays a strong foundation for future expansion, with potential to be adapted for additional species, modalities, and lab-specific datasets.
AlphaGenome is now available in preview for non-commercial use via the AlphaGenome API. Google is inviting researchers worldwide to explore use cases, ask questions, and share feedback. The AI-powered tool's predictions are intended strictly for research purposes.
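For researchers who want to try it, a query follows roughly the shape below, based on our reading of the published quickstart; treat the module paths, class names, arguments, and coordinates as assumptions to verify against the official AlphaGenome documentation.

```python
# A hedged sketch of calling the AlphaGenome API; verify every name and
# argument against the official docs. The key and coordinates are placeholders.
from alphagenome.data import genome
from alphagenome.models import dna_client

model = dna_client.create('YOUR_API_KEY')  # preview, non-commercial use

# Ask for predicted RNA expression around a variant, with ~1Mb of context.
interval = genome.Interval(chromosome='chr22', start=35677410, end=36725986)
variant = genome.Variant(chromosome='chr22', position=36201698,
                         reference_bases='A', alternate_bases='C')

outputs = model.predict_variant(
    interval=interval,
    variant=variant,
    requested_outputs=[dna_client.OutputType.RNA_SEQ],
)
print(outputs.reference.rna_seq, outputs.alternate.rna_seq)
```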
'We hope AlphaGenome will help deepen our understanding of the complex cellular processes encoded in DNA,' Google said, 'and drive new discoveries in genomics and healthcare.'


Related Articles

Spoilers! Why 'M3GAN 2.0' is actually a 'redemption story'

USA Today · 8 minutes ago

Spoiler alert! We're discussing major details about the ending of 'M3GAN 2.0' (in theaters now), so beware if you haven't seen it yet.

'You wouldn't give your child cocaine. Why would you give them a smartphone?' That's the sardonic hypothetical posed by roboticist Gemma (Allison Williams) at the start of 'M3GAN 2.0,' a high-octane sequel to the 2023 hit horror comedy. When the new movie picks up, Gemma is tirelessly advocating for government oversight of artificial intelligence, after creating a bratty, pussy-bowed animatronic named M3GAN that killed four people and a dog in the original film.

'Honestly, Gemma has a point,' jokes Williams, the mother of a 3-year-old, Arlo, with actor Alexander Dreymon. 'Any time my son looks at my screen, I'm like, 'This does feel like the way people react to cocaine. This is not going to be easy to remove from his presence.' '

The first movie was an allegory about parenting and how technology is compromising the emotional human bonds that we share with one another. But in the action-packed follow-up, writer/director Gerard Johnstone wanted to explore the real-life ramifications of having M3GAN-like technology unleashed on the world. 'With the way AI was changing, and the conversation around AI was evolving, it opened up a door narratively to where we could go in the sequel,' Johnstone says.

How does 'M3GAN 2.0' end?

'M3GAN 2.0' introduces a new villain in Amelia (Ivanna Sakhno), a weapons-grade automaton built by the U.S. military using M3GAN's stolen programming. But when Amelia goes rogue on a lethal mission for AI to rule the world, Gemma comes to realize that M3GAN is the only one who can stop her. Gemma reluctantly agrees to rebuild her impudent robot in a new body, and the sequel ends with an explosive showdown between Amelia and M3GAN, who nearly dies in a noble attempt to save Gemma and her niece, Cady (Violet McGraw).

'If Amelia walked out of that intact, that's a very different world we're all living in. M3GAN literally saves the world,' Williams says. 'When the first movie ends, you're like, 'Oh, she's a bad seed and I'm glad she's gone.' But by the end of this movie, you have completely different feelings about her. There's a feeling of relief when you realize she's still here, which is indicative of how much ground gets covered in this movie.'

M3GAN's willingness to sacrifice herself shows real growth from the deadpanning android that audiences fell in love with two years ago. But Johnstone has always felt 'a strong empathy' towards M3GAN and never wanted to make her an outright villain. Even in the first film, 'everything she does is a result of her programming,' Johnstone says. 'As soon as she does something that Gemma disagrees with, Gemma tries to turn her off, erase her, reprogram her, and effectively kill her. So from that point of view, M3GAN does feel rightly short-changed.' M3GAN's desire to prove herself, and take the moral high ground, is 'what this movie was really about,' Johnstone adds. 'I love redemption stories.'

Does 'M3GAN 2.0' set up a third movie?

For Williams, part of the appeal of a sequel was getting to play with how M3GAN exists in the world, after her doll exterior was destroyed in the first movie. M3GAN is offscreen for much of this film, with only her voice inhabiting everything from a sports car to a cutesy smart home assistant. 'She's just iterating constantly, which tore through a persona that we've come to know and love,' Williams says. 'It's an extremely cool exercise in a movie like this, where we get to end the movie with a much deeper understanding of who this character is. We've now interacted with her in so many different forms, and yet we still feel the consistency of who she 'is.' That's really the fun of it.'

In a way, 'she's like this digital poltergeist that's haunting them from another dimension,' Johnstone adds. 'It was a way to remind people she's more than a doll in a dress – she's an entity.' In the final scene of 'M3GAN 2.0,' we see the character living inside Gemma's computer, in a nostalgic nod to the Microsoft Word paper clip helper. (As millennials, 'our relationship with Clippy was very codependent and very complicated,' Williams quips.) But if there is a third 'M3GAN' movie, it's unlikely that you'll see her trapped in that virtual realm forever.

'M3GAN always needs to maintain a physical form,' Johnstone says. 'One aspect of AI philosophy that we address in this film is this idea of embodiment: If AI is ever going to achieve true consciousness, it has to have a physical form so it can feel anchored. So that's certainly M3GAN's point of view at the beginning of the movie: She feels that if she stays in this formless form for too long, she's going to fragment. M3GAN always has to be in a physical body that she recognizes – it's another reason why she won't change her face, even if it draws attention to herself. It's like, 'This is who I am and I'm not changing.' '

Windows has a major AI problem, and it's pushing me closer to Apple

Digital Trends · 9 minutes ago

Just over a year ago, Apple Intelligence was announced. It continues to be somewhat of a 'meh' affair compared to rival products like Microsoft's Copilot and Google's Gemini. What was not 'meh' was the support for Apple's generative AI bundle, which extended all the way back to the M1 silicon introduced in 2020. Even the fresh batch of AI features, such as live translations and intelligent Shortcuts, is fully supported on machines that will soon be five generations old. I can't say the same about Windows and its AI-powered rebirth with the Copilot package.

Before confusion ensues, let me clear things up. Copilot is a suite of AI features, just like Gemini or Apple Intelligence. Then we have Copilot+ machines, a branding for PCs that meet certain hardware-level requirements to enable AI-powered features on Windows laptops and desktops. Here's the weird part: a healthy chunk of Intel silicon launched in 2025, even chips in the powerful 'H' class, doesn't meet those AI processing requirements.

All of this has created a strange divide in the Windows ecosystem, where certain advanced AI features are locked to a handful of cheaper machines, even if you paid a much higher price for a laptop with a far more powerful processor. Oddly, it's not just the hardware but the software experience that now feels different.

Copilot+ is not merely AI hype

Before we get into the hardware limitations, let's break down the features. Copilot+ machines require a powerful hardware chip for AI acceleration to enable certain features down to the OS level. For example, in the Settings app, Microsoft is pushing its own Mu small language model (SLM) that runs entirely on the NPU. The NPU on a chip, however, must meet a certain performance baseline, one that even Intel and AMD silicon launched in 2025 doesn't universally clear.

Let's start with the AI-powered Settings app interactions. Settings can now understand natural language queries and make suggestions so that users can take action directly with a click. If you type something like 'My screen doesn't feel smooth,' the Settings app will show a dialog box underneath the search bar with an actionable button to increase the refresh rate and make interactions smoother. Apple is chasing something similar and has implemented it within the Spotlight system in macOS Tahoe.

Next, we have Recall. It's like a time machine that takes snapshots of your PC activity in the background and analyzes them contextually. If you later want to revisit or find something, you can simply type a natural language query and find a record of the activity, complete with a link to the webpage or app you were working with. It almost feels magical. The crucial benefit is that a healthy share of Copilot+ AI features run on-device, which means they don't require an internet connection. That's convenient, and more reassuring still, all user activity remains locked to your device; nothing is sent to servers.

Copilot+ hardware also enables a bunch of creative features such as Cocreator and Generative Fill in Paint, plus Super Resolution, Image Creator, and Restyle in the native Photos app. But a few are meaningful for day-to-day PC usage. With Click to Do in the Snipping Tool, the AI analyzes the text and images on screen, somewhat like Google Lens and Apple Intelligence.
You can select text, look it up on the web with a single click, send an email, open a website, summarize, rewrite, and take a wide range of image actions, such as copy, share, visual search in Bing, erase objects, and remove background, without ever opening another app. On the more practical side of things, we have translated Live Captions covering over 40 languages. The translation and captioning happen in real time and work during video calls and video playback, too. Finally, we have Windows Studio Effects, which handles chores such as automatic framing, portrait lighting tweaks, background effects, noise reduction, and even gaze adjustment.

The Copilot+ hardware wall

Even if you splurge $4,899 on a Razer Blade 18 with an Intel Core Ultra 9 275HX processor and Nvidia's top-of-the-line GeForce RTX 5090 graphics, your beastly gaming laptop still won't be able to run the Copilot+ features in Windows 11. That's because the NPU on this processor can only manage 13 TOPS, while a pint-sized $800 Microsoft tablet with a Qualcomm Snapdragon X processor can handle all the exclusive Copilot+ features just fine.

It's disheartening, because the Copilot+ experiences in Windows 11 are meaningful OS advancements. Most of them, at least. I have used a few of them extensively, and they feel like a practical evolution. Yet depriving machines that merely lack a powerful NPU, despite packing plenty of compute and graphics horsepower, is simply unfortunate.

Microsoft has laid out tight hardware requirements for machines that can bear the Copilot+ badge: 256GB of storage, 16GB of DDR5 RAM, and a processor with a dedicated AI accelerator that can deliver at least 40 TOPS. That's a bottleneck from both ends. First, a healthy number of machines still ship with 8GB of RAM, and DDR4 memory at that. Take the Asus Vivobook 17, which costs $700 and ships with 8GB of DDR4 memory in its entry configuration, even in the variant that packs a 13th-generation Intel processor. Say you pay up to reach 16GB of RAM: despite the added stress on your wallet, you are still limited by the RAM type and won't be able to run Copilot+ tools on the machine. It's worth mentioning that a LOT of Windows machines still pack 8GB of RAM, and even when they go up to 16GB, they often rely on DDR4 memory.

Now it's time to address the elephant in the room: the silicon situation. The latest from Intel is the Core Ultra 200 series processor family, which is split across the Arrow Lake and Lunar Lake lines. These Ultra 200 series processors come in four brackets: V-series, U-series, H-series, and HX-series. Of the four, only the V-series processors support Copilot+ experiences on Windows 11. Even the enthusiast-class H and HX series processors don't meet the NPU requirements, and as such, they are devoid of Copilot+ AI features.

As perplexing as the situation is with Intel's Ultra 200 series silicon, things aren't much different for AMD and its Copilot+ readiness. At the moment, only AMD's Ryzen AI 300 series processors fall under the Copilot+ bracket. That means if you invested in top-shelf AMD silicon in the past few years, or even plan to build an AMD gaming rig this year, you either lose out on Copilot+ perks or must pick from the Ryzen AI 300 series line-up.
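To make the gate concrete, here is a toy sketch, not a Microsoft tool, that encodes the requirements above. The type and function names are hypothetical; the 13 TOPS figure comes from this article, while the Snapdragon X's roughly 45 TOPS rating is an assumption not stated here.

```python
# Toy illustration of the Copilot+ bar described above: 256GB storage,
# 16GB of DDR5 RAM, and an NPU rated at 40+ TOPS. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class MachineSpec:
    storage_gb: int
    ram_gb: int
    ram_type: str     # e.g., "DDR4" or "DDR5"
    npu_tops: float

def meets_copilot_plus_bar(spec: MachineSpec) -> bool:
    return (spec.storage_gb >= 256 and spec.ram_gb >= 16
            and spec.ram_type == "DDR5" and spec.npu_tops >= 40)

# The article's contrast: a $4,899 Razer Blade 18 (13 TOPS NPU) fails,
# while an $800 Snapdragon X tablet (assumed ~45 TOPS) passes.
blade18 = MachineSpec(storage_gb=2048, ram_gb=32, ram_type="DDR5", npu_tops=13)
tablet = MachineSpec(storage_gb=256, ram_gb=16, ram_type="DDR5", npu_tops=45)
print(meets_copilot_plus_bar(blade18), meets_copilot_plus_bar(tablet))  # False True
```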
Even older Macs do better

The situation with Copilot+ is weird because it has created fault lines in the Windows 11 experience that make sense neither from a price perspective nor from a horsepower angle. It even makes one feel bad about spending a fortune on a top-tier Intel processor, only to find it locked out of next-gen AI features in Windows 11 because the NPU isn't up to the task. The only other option is to pick a Qualcomm Snapdragon X-series processor. But in doing so, you run into the compatibility hurdles that come with Windows on Arm, and the GPU limitations rule out gaming and other demanding tasks where you need a powerful GPU.

Right now, Copilot+ is a bag of serious caveats. And as Microsoft ships more AI-first experiences, the gulf within Windows 11 is only going to widen. An $800 Copilot+ machine will run native AI experiences that even a powerful desktop won't be able to handle in the near future. The situation in the Apple ecosystem is just the opposite: even if you have a nearly five-year-old M1 MacBook Air, you can run all the Apple Intelligence features just fine.

Now, one can argue that AI is not the deciding factor when picking a laptop. But as companies like Microsoft, Apple, and Google integrate AI packages such as Copilot, Siri, and Gemini deep into their operating systems at the native level, these AI features will essentially serve as a key computing evolution. Google has already given us a glimpse of how tightly interweaving Gemini across its Workspace tools can play out, and the progress of Apple Intelligence within macOS is somewhat similar. When it comes to OS-level AI progress, it's Microsoft that finds itself in an odd place, where a huge chunk of Windows 11 users are going to feel left out while macOS users move forward just fine on aging hardware.

AT&T's $177 Million Settlement Will Pay Victims of Two Huge Data Breaches. Learn Who Qualifies

CNET · 22 minutes ago

AT&T's settlement stems from two data breaches, in 2019 and 2024. Of the 1,350,835,988 notices sent to subjects of data breaches in 2024, almost a tenth came from a hack of AT&T servers in April, according to the Identity Theft Resource Center's 2024 Annual Data Breach Report. The telecom giant now plans to settle a lawsuit for that breach and another in 2019 for a whopping $177 million.

On Friday, June 20, US District Judge Ada Brown granted preliminary approval to the terms of a proposed settlement from AT&T that would resolve two lawsuits related to the data breaches. The settlement would see AT&T pay $177 million to customers adversely affected by at least one of the two breaches. It will prioritize larger payments to customers who suffered damages that are "fairly traceable" to the data leaks, and it will provide bigger payments to those impacted by the larger of the two leaks, which began in 2019. While the company is working toward a settlement, it has continued to deny that it was "responsible for these criminal acts."

For all the details we have about the settlement right now, keep reading. For more on other recent settlements, find out how to claim Apple's Siri privacy settlement and see if you're eligible for 23andMe's privacy breach settlement.

What happened with these AT&T data breaches?

AT&T first confirmed the two data breaches last year, announcing an investigation into the first in March before confirming it in May, followed by confirmation of the second in July. The first of the confirmed breaches began in 2019. The company revealed that around 7.6 million current and 65.4 million former account holders had their data exposed to hackers, including names, Social Security numbers, and dates of birth. The company began investigating last year after customer data appeared on the dark web.

The second breach began in April 2024, when a hacker broke into AT&T cloud storage provider Snowflake and accessed 2022 call and text records for almost all of the company's US customers, around 109 million in all. The company stressed that no names were attached to the stolen data, and two individuals were arrested in connection with the breach. Both incidents sparked a wave of class action lawsuits alleging corporate neglect on the part of AT&T for failing to sufficiently protect its customers.

How will I know if I'm eligible for the AT&T data breach settlement?

As of now, we know that the settlement will pay out to any current or former AT&T customer whose data was accessed in one of these breaches, with higher payments reserved for those who can provide documented proof that they suffered damages directly resulting from their data being stolen. If you're eligible, you should receive a notice, either by email or by a physical letter in the mail, sometime in the coming months. The company expects the claims process to begin on Aug. 4, 2025.

How much will the AT&T data breach payments be?

You'll have to "reasonably" prove damages caused by these data breaches to be eligible for the highest and most prioritized payouts. For the 2019 breach, those claimants can receive up to $5,000. For the Snowflake breach, the maximum payout will be $2,500. It's not clear at this time how the company will handle customers affected by both breaches.
AT&T will make those payments first, and whatever's left of the $177 million settlement will be disbursed to anyone whose data was accessed, even without proof of damages. Since those payouts depend on how many people claim the higher amounts first, we can't say definitively how much they will be.

When could I get paid from the AT&T data breach settlement?

AT&T expects payments to start going out sometime in early 2026; exact dates aren't available right now. The recent court order approving the settlement lists a notification schedule of Aug. 4 to Oct. 17, 2025. The deadline for submitting a claim is currently set at Nov. 18, 2025, and the settlement needs final approval at a Dec. 3, 2025 court hearing before payments can begin. Stay tuned to this piece in the coming months for new details as they emerge, and for more money help, check out CNET's daily tariff price impact tracker.
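As a rough illustration of that payment order, here is a toy sketch. It is not AT&T's actual formula: the equal split of the remainder is an assumption, and every count and claim amount below is made up.

```python
# Toy sketch of the disbursement order described above: documented-loss
# claims are paid first, capped at $5,000 (2019 breach) or $2,500
# (Snowflake breach), then the remainder of the $177M fund is split among
# everyone else whose data was accessed. Equal split is an assumption.
FUND = 177_000_000
CAPS = {"breach_2019": 5_000, "snowflake_2024": 2_500}

def disburse(documented_claims, n_undocumented):
    """documented_claims: list of (breach_key, claimed_loss) pairs."""
    tier_one = [min(loss, CAPS[breach]) for breach, loss in documented_claims]
    remainder = max(FUND - sum(tier_one), 0)
    per_person = remainder / max(n_undocumented, 1)
    return tier_one, per_person

# Made-up numbers, purely to show the mechanics.
claims = [("breach_2019", 7_200), ("snowflake_2024", 1_400)]
paid, rest = disburse(claims, n_undocumented=50_000_000)
print(paid, round(rest, 2))  # [5000, 1400] 3.54
```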
