
Google DeepMind CEO Demis Hassabis on AI in the Military and What AGI Could Mean for Humanity
Two and a half years later, much has changed in the world of AI. Hassabis, for his part, won a share of the 2024 Nobel Prize in Chemistry for his work on AlphaFold—an AI system that can predict the 3D structures of proteins, and which has turbocharged biomedical research. The pace of AI improvement has been so rapid that many researchers, Hassabis among them, now believe human-level AI (known in the industry as Artificial General Intelligence, or AGI) will perhaps arrive this decade. In 2022, even acknowledging the possibility of AGI was seen as fringe. But Hassabis has always been a believer. In fact, creating AGI is his life's goal.
Creating AGI will require huge amounts of computing power—infrastructure which only a few tech giants, Google being one of them, possess. That gives Google more leverage over Hassabis than he might like to admit. When Hassabis joined Google, he extracted a pledge from the company: that DeepMind's AI would never be used for military or weapons purposes. But 10 years later, that pledge is no more. Now Google sells its services—including DeepMind's AI—to militaries including those of the United States and, as TIME revealed last year, Israel. So one of the questions I wanted to ask Hassabis, when we sat down for a chat on the occasion of his inclusion in this year's TIME100, was this: did you make a compromise in order to have the chance of achieving your life's goal?
This interview has been condensed and edited for clarity.
AGI, if it's created, will be very impactful. Could you paint the best case scenario for me? What does the world look like if we create AGI?
The reason I've worked on AI and AGI my entire life is because I believe, if it's done properly and responsibly, it will be the most beneficial technology ever invented. So the kinds of things that I think we could use it for, winding forward 10-plus years from now, are potentially curing maybe all diseases with AI, and helping develop new energy sources, whether that's fusion or optimal batteries or new materials like new superconductors. I think some of the biggest problems that face us today as a society, whether that's climate or disease, will be helped by AI solutions. So if we went forward 10 years in time, I think the optimistic view of it will be, we'll be in this world of maximum human flourishing, traveling the stars, with all the technologies that AI will help bring about.
Let's take climate, for example. I don't think we're going to solve that in any other way, other than more technology, including AI assisted technologies like new types of energy and so on. I don't think we're going to get collective action together quick enough to do anything about it meaningfully.
Put it another way: I'd be very worried about society today if I didn't know that something as transformative as AI was coming down the line. I firmly believe that. One reason I'm optimistic about where the next 50 years are going to go is because I know that if we build AI correctly, it will be able to help with some of our most pressing problems. It's almost like the cavalry. I think we need the cavalry today.
You've also been quite vocal about the need to avoid the risks. Could you paint the worst-case scenario?
Sure. Well, look, worst case, I think, has been covered a lot in science fiction. I think the two issues I worry about most are: AI is going to be this fantastic technology if used in the right way, but it's a dual-purpose technology, and it's going to be unbelievably powerful. So what that means is that would-be bad actors can repurpose that technology for potentially harmful ends. So one big challenge we have as a field and a society is, how do we enable access to these technologies for the good actors to do amazing things like cure terrible diseases, while at the same time restricting access to those same technologies for would-be bad actors, whether that's individuals all the way up to rogue nations? That's a really hard conundrum to solve. The second thing is AGI risk itself. So risk from the technology itself, as it becomes more autonomous, more agent-based, which is what's going to happen over the next few years. How do we ensure that we can stay in charge of those systems, control them, interpret what they're doing, understand them, and put the right guardrails in place that are not movable by very highly capable systems that are self-improving? That is also an extremely difficult challenge. So those are the two main buckets of risk. If we can get them right, then I think we'll end up in this amazing future.
It's not a worst-case scenario, though. What does the worst-case scenario look like?
Well, I think if you get that wrong, then you've got all these harmful use-cases being done with these systems, and that can range from doing the opposite of what we're trying to do—instead of finding cures, you could end up finding toxins with those same systems. And so all the good use-cases, if you invert the goals of the system, you would get the harmful use-cases. And as a society, this is why I've been in favor of international cooperation. Because the systems, wherever they're built, or however they're built, they can be distributed all around the world. They can affect everyone in pretty much every corner of the world. So we need international standards, I think, around how these systems get built, what designs and goals we give them, and how they're deployed and used.
When Google acquired DeepMind in 2014 you signed a contract that said Google wouldn't use your technology for military purposes. Since then, you've restructured. Now DeepMind tech is sold to various militaries, including the U.S. and Israel. You've talked about the huge upside of developing AGI. Do you feel like you compromised on that front in order to have the opportunity to make that technology?
No, I don't think so. I think we've updated things recently to partly take into account the much bigger geopolitical uncertainties we have around the world. Unfortunately, the world's become a much more dangerous place. I think we can't take for granted anymore that democratic values are going to win out—I don't think that's clear at all. There are serious threats. So I think we need to work with governments. And working with governments also allows us to work with other regulated, important industries too, like banking, health care and so on. Nothing's changed about our principles. The fundamental thing about our principles has always been: we've got to thoughtfully weigh up the benefits, and they've got to substantially outweigh the risk of harm. So that's a high bar for anything that we might want to do. Of course, we've got to respect international law and human rights—that's all still in there.
And then the other thing that's changed is the widespread availability of this technology, right? So open source, DeepSeek, Llama, whatever, they're maybe not quite as good as the absolute top proprietary models, but they're pretty good. And once it's open source, basically that means the whole world can use it for anything. So I think of that as commoditized technology in some sense, and then there's what's bespoke. And for the bespoke work, we plan to work on things that we are uniquely suited to and best in the world at, like cyber defense and biosecurity—areas where I think it's actually a moral duty for us, I would argue, to help, because we are the best in the world at that. And I think it's very important for the West.
There's a lot of talk in the AI safety world about the degree to which these systems are likely to do things like power-seeking, to be deceptive, to seek to disempower humans and escape their control. Do you have a strong view on whether that's the default path, or is that a tail risk?
My feeling on that is the risks are unknown. So there's a lot of people, my colleagues, famous Turing Award winners, on both sides of that argument. I think the right answer is somewhere in the middle, which is, if you look at that debate, there's very smart people on both sides of it. So what that tells me is that we don't know enough about it yet to actually quantify the risk. It might turn out that as we develop these systems further, it's way easier to keep control of these systems than we thought, or we expected, hypothetically. Quite a lot of things have turned out like that. So there's some evidence that things may be a little bit easier than some of the most pessimistic people were thinking, but in my view, there's still significant risk, and we've got to do research carefully to quantify what that risk is, and then deal with it ahead of time with as much foresight as possible, rather than after the fact, which, with technologies this powerful and this transformative, could be extremely risky.
What keeps you up at night?
For me, it's this question of international standards and cooperation, not just between countries, but also between companies and researchers as we get towards the final steps of AGI. And I think we're on the cusp of that. Maybe we're five to 10 years out. Some people say shorter. I wouldn't be surprised. It's like a probability distribution. But either way, it's coming very soon. And I'm not sure society's quite ready for that yet. And we need to think that through, and also think about these issues I talked about earlier to do with the controllability of these systems, and also the access to these systems, and ensuring that all goes well.
Do you see yourself more as a scientist, or a technologist? You're far away from Silicon Valley, here in London. How do you identify?
I identify myself as a scientist first and foremost. The whole reason I'm doing everything I've done in my life is in the pursuit of knowledge and trying to understand the world around us. I've been obsessed with that since I was a kid. And for me, building AI is my expression of how to address those questions: to first build a tool—that in itself is pretty fascinating and is a statement about intelligence and consciousness and these things that are already some of the biggest mysteries—and then it can have a dual purpose, because it can also be used as a tool to investigate the natural world around you as well, like chemistry, physics, and biology. What more exciting adventure and pursuit could you have? So, I see myself as a scientist first, and then maybe like an entrepreneur second, mostly because that's the fastest way to do things. And then finally, maybe a technologist-engineer, because in the end, you don't want to just theorize and think about things in a lab. You actually want to make a practical difference in the world.
I want to talk a bit about timelines. Sam Altman and Dario Amodei have both come out recently…
Ultra-short, right?
Altman says he expects AGI within Trump's presidency. And Amodei says it could come as early as 2026.
Look, partially, it depends on your definition of AGI. So I think there's been a lot of watering down of that definition for various reasons, raising money—there's various reasons people might do that. Our definition has been really consistent all the way through: this idea of having all the cognitive capabilities humans have. My test for that, actually, is: could [an AI] have come up with general relativity with the same amount of information that Einstein had in the 1900s? So it's not just about solving a math conjecture; can you come up with a worthy one? So I'm pretty sure we have systems that can solve one of the Millennium Prizes soon. But could you come up with a set of conjectures that are as interesting as that?
It sounds like, in a nutshell, it's the difference that you described between being a scientist and being a technologist. All the technologists are saying: it's a system that can do economically valuable labor better or cheaper than a human.
That's a great way of phrasing it. Maybe that's why I'm so fascinated by that part, because it's the scientists that I've always admired in history, and I think those are the people that actually push knowledge forward—versus making it practically useful. Both are important for society, obviously. Both the engineering and the science part. But I think [existing AI] is missing that hypothesis generation.
Let's get more concrete in terms of specifics. How far away do you think we are from an automated researcher that can contribute meaningfully to AI research?
I think we're a few years away. I think coding assistants are getting pretty good. And by next year, I think they'll be very good. We're pushing hard on that. [Anthropic] focuses mostly on that, whereas we've been doing more science things. [AI is still] not as good as the best programmers at laying out a beautiful structure for an operating system. I think that part is still missing, and so I think it's a few years away.
You focus quite strongly on multimodality in your Gemini models, and on grounding them not just in the language space, but in the real world. You focus on that more than the other labs. Why is that?
For several reasons. One, I think true intelligence is going to require an understanding of the spatio-temporal world around you. It's also important for any real science that you want to do. I also thought it would actually make the language models better, and I think we're seeing some of that, because you've actually grounded it in the real-world context. Although, actually, language has gone a lot further on its own than some people thought, and maybe further than I would have thought possible. And then finally, it's a use-case thing too, because I've got two use-cases in mind that we're working on heavily. One is this idea of a universal digital assistant that can help you in your everyday life, to be more productive and enrich your life. One that doesn't just live on your computer, but goes around with you, maybe on your phone or glasses or some other device, and it's super useful all the time. And for that to work, it needs to understand the world around you and process the world around you.
And then secondly, for robotics, it's exactly what you need for real-world robotics to work. It has to understand the spatial context that it's in. [Humans are] multimodal, right? So, we work on screens. We have vision. There's videos that we like to watch, images that we want to create, and audio we want to listen to. So I think an AI system needs to mirror that to interact with us in the fullest possible sense.
Signal president Meredith Whittaker has made quite a significant critique of the universal agent that you've just described there, which is that you're not just getting this assistance out of nowhere. You're giving up a lot of your data in exchange. In order for it to be helpful, you have to give it access to almost everything about your life. Google is a digital advertising company that collects personal information to serve targeted ads. How are you thinking about the privacy implications of agents?
Meredith is right to point that out. I love the work she's doing at Signal. I think first of all, these things would need to all be opt-in.
But we opt into all kinds of stuff. We opt into digital tracking.
So first, it's your choice, but of course, people will do it because it's useful, obviously. I think this will only work if you are totally convinced that that assistant is yours, right? It's got to be trustworthy to you, because for it to be just like a real-life human assistant, they're really useful once they know you. My assistants know me better than I know myself, and that's why we work so well as a team together. I think that's the kind of usefulness you'd want from your digital assistant. But then you'd have to be sure it really is siloed away. We have some of the best security people in the world who work on these things to make sure it's privacy-preserving, it's encrypted even on our servers, all of those kinds of technologies. We're working very hard on those so that they're ready for when the assistant stuff, which is called Project Astra for us, is ready for prime time. I think it will be a consumer decision; people will want to go with systems that are privacy-preserving. And I think edge computing and edge models are going to be very important here too, which is one of the reasons we care so much about small, very performant models that can run on a single device.
I don't know how long you think it is before we start seeing major labor market impacts from this stuff. But if or when that happens, it will be massively politically disruptive, right? Do you have a plan for navigating that disruption?
I talk to quite a lot of economists about this. I think, first of all, there needs to be more serious work done by experts in the field—economists and others. I'm not sure there is enough work going on in that area when I talk to economists. We're building agent systems because they'll be more useful. And then that, I think, will have some impact on jobs too, although I suspect it will enable other jobs, new jobs that don't exist right now, where you're managing a set of agents that are doing the mundane stuff, maybe some of the background research, whatever, but you still write the final article, or come up with the final research paper. Or the idea for it. Like, why are you researching those things?
So I think in the next phase there'll be humans super-powered by these amazing tools, assuming you know how to use them, right? So there is going to be disruption, but I think net it will be better, and there'll be better jobs and more fulfilling jobs, and the more mundane work will go away. That's how it's been with technology in the past. But then with AGI, when it can do many, many things, I think it's a question of: can we distribute the productivity gains fairly and widely around the world? And then there's still a question after that, of meaning and purpose. So that's the next philosophical question, which I actually think we need some great new philosophers to be thinking about today.
When I last interviewed you in 2022 we talked a little bit about this, and you said: 'If you're in a world of radical abundance, there should be less room for inequality and less ways that it could come about. So that's one of the positive consequences of the AGI vision if it gets realized.' But in that world, there will still be people who control wealth and people who don't have that wealth, and workers who might not have jobs anymore. It seems like the vision of radical abundance would require a major political revolution to get to the point where that wealth is redistributed. Can you flesh out your vision for how that happens?
I haven't spent a lot of my time personally on this, although probably I increasingly should. And again, I think the top economists should be thinking a lot about this. I feel like radical abundance really means things like you solve fusion and/or optimal batteries and/or superconductors. Let's say you've solved all three of those things with the help of AI. That means energy should [cost] basically zero, and it's clean and renewable, right? And suddenly that means you can have all water access problems go away because you just have desalination plants, and that's fine, because that's just energy and sea water. It also means making rocket fuel is… you just separate hydrogen and oxygen from sea water, using similar techniques, right? So suddenly, a lot of those things that underlie the capitalist world don't really hold anymore, because the base of that is energy costs and resource costs and resource scarcity. But if you've now opened up space and you can mine asteroids and all those things—it'll take decades to build the infrastructure for that—then we should be in this new era economically.
I don't think that addresses the inequality question at all, right? There's still wealth to be gained and amassed by mining those asteroids. Land is finite.
So there's a lot of things that are finite today, which then means it's a zero-sum game in the end. What I'm thinking about is a world where it's not a zero-sum game anymore, at least from a resource perspective. So then there's still other questions [like] do people still want power and other things like that? Probably. So that has to be addressed politically. But at least you've solved one of the major problems, which is that, in the end, in the limited-resource world we're in, things ultimately become zero-sum. It's not the only source, but it's a major source of conflict, and it's a major source of inequality, when you boil it all the way down.
That's what I mean by radical abundance. We are no longer, in a meaningful way, in a zero-sum, resource-constrained world. But there probably will need to be a new political philosophy around that, I'm pretty sure.
We got to democracy in the Western world, via the Enlightenment, largely because citizens had the power to withhold their labor and threaten to overthrow the state, right? If we do get to AGI, it seems like we lose both of those things, and that might be bad for democracy.
Maybe. I mean, maybe we have to evolve to something else that's better, I don't know. Like, there's some problems with democracy too. It's not a panacea by any means. I think it was Churchill who said that it's the least-worst form of government, something like that. Maybe there's something better. I can tell you what's going to happen technologically. I think if we do this right, we should end up with radical abundance. If we fix a few of the root-node problems, as I call them. And then there's this political philosophy question. I think that is one of the things people are underestimating. I think we're going to need a new philosophy of how to live.
YouTube on Wednesday will begin testing a new age-verification system in the U.S. that relies on artificial intelligence to differentiate between adults and minors, based on the kinds of videos that they have been watching. The tests initially will only affect a sliver of YouTube's audience in the U.S., but it will likely become more pervasive if the system works as well at guessing viewers' ages as it does in other parts of the world. The system will only work when viewers are logged into their accounts, and it will make its age assessments regardless of the birth date a user might have entered upon signing up. If the system flags a logged-in viewer as being under 18, YouTube will impose the normal controls and restrictions that the site already uses as a way to prevent minors from watching videos and engaging in other behavior deemed inappropriate for that age. The new system will only work if viewers are logged into their accounts. AP The safeguards include reminders to take a break from the screen, privacy warnings and restrictions on video recommendations. YouTube, which has been owned by Google for nearly 20 years, also doesn't show ads tailored to individual tastes if a viewer is under 18. If the system has inaccurately called out a viewer as a minor, the mistake can be corrected by showing YouTube a government-issued identification card, a credit card or a selfie. 'YouTube was one of the first platforms to offer experiences designed specifically for young people, and we're proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy,' James Beser, the video service's director of product management, wrote in a blog post about the age-verification system. People still will be able to watch YouTube videos without logging into an account, but viewing that way triggers an automatic block on some content without proof of age. If a viewer is mistakenly called a minor, they can correct it by showing YouTube a government-issued ID, credit card, or a selfie. fizkes – The political pressure has been building on websites to do a better job of verifying ages to shield children from inappropriate content since late June when the U.S. Supreme Court upheld a Texas law aimed at preventing minors from watching pornography online. While some services, such as YouTube, have been stepping up their efforts to verify users' ages, others have contended that the responsibility should primarily fall upon the two main smartphone app stores run by Apple and Google — a position that those two technology powerhouses have resisted. Some digital rights groups, such as the Electronic Frontier Foundation and the Center for Democracy & Technology, have raised concerns that age verification could infringe on personal privacy and violate First Amendment protections on free speech.