
The Man Who 'A.G.I.-Pilled' Google
A few years ago, most Google executives didn't talk about A.G.I. — artificial general intelligence, the industry term for a human-level A.I. system. Even if they thought A.G.I. might be technically possible, the idea seemed so remote that it was barely worth discussing.
But this week, at Google's annual developer conference, A.G.I. was in the air. The company announced a slate of new releases tied to Google's Gemini A.I. models, including new features designed to let users write A.I.-generated emails, create A.I.-generated videos and songs, and chat with an A.I. bot on the flagship search engine. Google's leaders traded guesses about when more powerful systems might arrive. And they predicted profound changes ahead, as A.I. tools become more capable and autonomous.
The man most responsible for making Google 'A.G.I.-pilled' — industry shorthand for the way people can become gripped by the notion that A.G.I. is imminent — is Demis Hassabis.
Mr. Hassabis, the chief executive of Google DeepMind, has been dreaming of A.G.I. for years. When he joined Google in 2014 through the acquisition of DeepMind, the artificial intelligence start-up he co-founded, Mr. Hassabis was one of a handful of A.I. leaders taking the possibility of A.G.I. seriously.
Today, he is one of a growing number of tech leaders racing to build A.G.I. — as well as other A.I. products that fall short of general intelligence but are impressive in their own right. Last year, Mr. Hassabis and his Google DeepMind colleague, John M. Jumper, received the Nobel Prize in Chemistry for their work on AlphaFold, an A.I. system capable of predicting the three-dimensional structures of proteins.
This week on 'Hard Fork,' we interviewed Mr. Hassabis about his views on A.G.I. and the strange futures that might follow its arrival. You can listen to our conversation by clicking the 'Play' button below or by following the show on Apple, Spotify, Amazon, YouTube, iHeartRadio or wherever you get your podcasts. Or, if you prefer to read, you'll find an edited transcript of our conversation, which begins about 23 minutes into the podcast, below.
Kevin Roose: So, you just had Google I/O, and it was really the Gemini show. Gemini's name was mentioned something like 95 times in the keynote. Of all the stuff that was announced, what do you think will be the biggest deal for the average user?
Demis Hassabis: We did announce a lot of things. For the average user, I think it's the new powerful models and, I hope, this Astra-type technology coming into Gemini Live. I think it's really magical, actually, when people use it for the first time and realize that A.I. is already capable today of doing much more than they thought. And then I guess Veo 3 was probably the biggest announcement of the show and seems to be going viral now, which is pretty exciting as well, I think.
Kevin Roose: One thing that struck me about I/O this year compared to previous years is that it seems like Google is sort of getting A.G.I.-pilled, as they say. I remember interviewing researchers at Google even a couple years ago, and there was a little taboo about talking about A.G.I. They would sort of be like, 'Oh, Demis and his DeepMind people in London, that's their crazy thing that they're excited about. But here we're doing real research.' But now you've got, like, senior Google executives talking openly about it. What explains that shift?
Demis Hassabis: I think the sort of A.I. part of the equation is becoming more and more central, like I sometimes describe Google DeepMind now as the engine room of Google, and I think you saw that probably in the keynote yesterday, really, if you take a step back. And then it's very clear, I think. You could sort of say 'A.G.I.-pilled' is maybe the right word, that we're quite close to this human-level general intelligence, maybe closer than people thought even a couple of years ago. And it's going to have broad, crosscutting impact. And I think that's another thing that you saw at the keynote. It's sort of literally popping up everywhere because it's this horizontal layer that's going to underpin everything, and I think everyone is starting to understand that, and maybe a bit of the DeepMind ethos is bleeding into the general Google, which is great.
Casey Newton: You mentioned [in Tuesday's keynote] that Project Astra is powering some things that maybe people don't even realize that A.I. can do. I think this speaks to a real challenge in the A.I. business right now, which is that the models have these pretty amazing capabilities, but either the products aren't selling them or the users just sort of haven't figured them out yet. So, how are you thinking about that challenge, and how much do you bring yourself to the product question as opposed to the research question?
Demis Hassabis: One of the challenges of this space, I think, is that the underlying tech is obviously moving unbelievably fast, and I think that's quite different even from the other big revolutionary techs: internet and mobile. At some point you get some sort of stabilization of the tech stack so that the focus can then be on product, on exploiting that tech stack. And what we've got here, which I think is very unusual but also quite exciting from a researcher perspective, is that the tech stack itself is evolving incredibly fast, as you guys know. So I think that makes it uniquely challenging, actually, on the product side. Not just for us at Google and DeepMind, but for startups — for anyone, really, any company, small and large — the challenge is: What do you bet on right now when that could be 100 percent better in a year, as we've seen? And so you've got this interesting thing where you need fairly deeply technical sort of product people — product designers and managers — in order to intercept where the technology may be in a year. There are things it can't do today, and you want to design a product that's going to come out in a year, so you've got to have a pretty deep understanding of the tech and where it might go, to work out what features you can rely on. So it's an interesting one. I think that's what you're seeing: so many different things being tried out, and then if something works, we've got to really double down on it quickly.
Casey Newton: Yeah, during your keynote, you talked about Gemini as powering both productivity/assistant-style stuff and also fundamental science and research challenges. And I wonder, in your mind, is that the same problem that one great model can solve? Or are those very different problems that just require different approaches?
Demis Hassabis: When you look at it, it looks like an incredible breadth of things, which is true, and how are these things related, other than the fact that I'm interested in all of them? But that was always the idea with building general intelligence, truly generally and in the way that we're doing it; it should be applicable to almost anything: from productivity, which is very exciting, helping billions of people in their everyday lives, to cracking some of the biggest problems in science. Ninety percent of it, I would say, is the underlying core general models — in our case, Gemini, especially 2.5. And in most of these areas, you still need additional applied research, or a little bit of special casing from the domain. Maybe it's special data, or whatever, to tackle that problem. And maybe we work with domain experts in the scientific areas. But underlying it, when you crack one of those areas, you can also put those learnings back into the general model. And then the general model gets better and better. So it's a very interesting flywheel. And it's great fun for someone like me, who's very interested in many things. You get to use this technology and go into almost any field that you find interesting.
Kevin Roose: A thing that a lot of A.I. companies are wrestling with right now is how many resources to devote to the core A.I. push on the foundation models — making the models better at the basic level — versus how much time and energy and money do you spend trying to spin out parts of that and commercialize it and turn it into products. And I imagine this is both a resources challenge but also a personnel challenge. Because — say you join DeepMind as an engineer and you want to build A.G.I., and then someone from Google comes to you and says, we actually want your help building the shopping thing that's going to let people try on clothes. Is that a challenging conversation to have with people who joined for one reason and may be asked to work on something else?
Demis Hassabis: It's sort of self-selecting, internally. There are enough engineers on the product teams who can deal with the product development and prod eng [product engineering]. And the researchers — if they want to stay in core research, that's fine. And we need that. But actually you'll find a lot of researchers are quite motivated by real-world impact, be that in medicine, obviously, and things like Isomorphic. But also, to have billions of people use their research is actually really motivating. And so there are plenty of people who like to do both. So there is no need for us to have to pivot people to certain things.
Kevin Roose: You did a panel yesterday [Tuesday] with Sergey Brin, Google's co-founder, who has been working on this stuff back in the office. And interestingly, he has shorter A.G.I. timelines than you. He thought A.G.I. would arrive before 2030, and you said just after. He actually accused you of sandbagging; basically, like artificially pushing out your estimates so that you could underpromise and overdeliver. But I'm curious about that, because you will often hear people at different A.I. companies arguing about when the timelines are, but presumably you and Sergey have access to all the same information and the same road maps, and you understand what's possible and what's not. So what is he seeing that you're not, or vice versa, that leads you to different conclusions about when A.G.I. is going to arrive?
Demis Hassabis: Well, first of all, there wasn't that much difference in our timelines, if he's just before 2030 and I'm just after. Also, my timeline's been pretty consistent since the start of DeepMind in 2010. We thought it was roughly a 20-year mission, and amazingly we're on track. So it's somewhere around there, I would think. I obviously have a probability distribution, and the most mass of that is between five and 10 years from now. And I think partly it's to do with the fact that predicting anything precisely five to 10 years out is very difficult. So there are uncertainty bars around that. And then there's also uncertainty about how many more breakthroughs are required, and about the definition of A.G.I. I have quite a high bar, which I've always had, which is: It should be able to do all of the things that the human brain can do, even theoretically. And so that's a higher bar than, say, what the typical individual human could do, which is obviously very economically important. That would be a big milestone, but not, in my view, enough to call it A.G.I.
And we talked onstage a little bit about what is missing from today's systems: sort of true out-of-the-box invention and thinking, sort of inventing a conjecture rather than just solving a math conjecture. Solving one is pretty good, but actually inventing, like, the Riemann hypothesis or something as significant as that, which mathematicians agree is really important, is much harder. And also consistency: Consistency is a requirement of generality, really. And it should be very, very difficult for even top experts to find flaws, especially trivial flaws, in the systems, which we can easily find today. And the average person can do that. So there's a sort of capabilities gap, and there is a consistency gap before we get to what I would consider A.G.I.
Casey Newton: And when you think about closing that gap, do you think it arrives via incremental 2, 5 percent improvements in each successive model, just kind of stacked up over a long period of time? Or do you think it's more likely that we'll hit some sort of technological breakthrough and then all of a sudden there's liftoff and we hit some sort of intelligence explosion?
Demis Hassabis: I think it could be both, and I think for sure both is going to be useful, which is why we push unbelievably hard on the scaling and what you would call incremental. Although actually there's a lot of innovation even in that, to keep moving that forward in pretraining, posttraining, inference time compute, all of that stack. So there's actually lots of exciting research and we showed some of that with the diffusion model, the Deep Think model. So we're innovating at all parts of the traditional stack, should we call it. And then on top of that, we're doing more green field things, more blue sky things, like AlphaEvolve.
Kevin Roose: Is there a difference between a green field thing and a blue sky thing?
Demis Hassabis: [laughs] I'm not sure. Maybe they're pretty similar.
Kevin Roose: OK.
Demis Hassabis: 'Some new area,' let's call it. And then that could come back into the main branch, right? I've been a fundamental believer in foundational research. We've always had the broadest, deepest research bench, I think, of any lab out there. And that's what allowed us to do past big breakthroughs: obviously transformers, but AlphaGo, AlphaZero, distillation, all of these. And to the extent any of those things are needed again, another big breakthrough of that level, I would back us to do that. And we're pursuing lots of very exciting avenues that could bring that sort of step change, as well as the incremental. And then they, of course, also interact, because the better you have your base models, the more things you can try on top of it. Again, like AlphaEvolve, you add in evolutionary programming, in that case, on top of the LLMs.
Kevin Roose: We recently talked to Karen Hao, who's a journalist who just wrote a book about A.I. And she was making an argument essentially against scale — that you don't need these big general models that are incredibly energy-intensive and compute-intensive and require billions of dollars and new data centers and all kinds of resources. That, instead of doing that kind of thing, you could build smaller models. You could build narrower models. You could have a model like AlphaFold that is just designed to predict the 3-D structures of proteins. You don't need a huge behemoth of a model to accomplish that. What's your response to that?
Demis Hassabis: Well, I think you need those big models. We love big and small models, but you need the big models, often, to train the smaller models. So we're very proud of our Flash models — we call them our workhorse models, which are really efficient and some of the most popular models. We use a ton of those types of models internally. But you can't build those kinds of models without distilling from the larger teacher models. And even things like AlphaFold — obviously, I'm a huge advocate for more of those types of models that can tackle really important problems in science and medicine today; we don't have to wait for A.G.I. And that will require taking the general techniques but then potentially specializing them, in that case around protein structure prediction. And I think there's huge potential for doing more of those things. We are — largely in our A.I.-for-science work — producing something pretty cool on that front pretty much every month these days. And I think there should be a lot more exploration there. Probably a lot of start-ups could be built combining some kind of general model that exists today with some domain specificity. But if you're interested in A.G.I., you've got to push both sides of that. It's not an 'either/or' in my mind. I'm an 'and,' right? Let's scale, let's look at specialized techniques, let's look at new blue sky research that could deliver the next transformers. We're betting on all of those things.
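[Hassabis doesn't spell out Google's training recipe here, but the teacher-student distillation he alludes to is a standard technique. Below is a minimal, illustrative PyTorch-style sketch of the idea; the function name, temperature and weighting are assumptions for exposition, not Gemini's actual setup.]

```python
# Minimal sketch of knowledge distillation: a small "student" model is trained
# to match the softened output distribution of a large, frozen "teacher",
# blended with the usual supervised loss. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # The teacher is frozen; we never backpropagate through it.
    teacher_logits = teacher_logits.detach()

    # Soft targets: both distributions are softened by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Standard cross-entropy on the hard ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    # Weighted mix of "imitate the teacher" and "fit the data".
    return alpha * kl + (1 - alpha) * ce
```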
Casey Newton: You mentioned AlphaEvolve, something that Kevin and I were both really fascinated by. Tell us what AlphaEvolve is.
Demis Hassabis: Well, at a high level, it's basically taking our latest Gemini models, actually two different ones, to generate ideas and hypotheses about programs and other mathematical functions, and then they go into an evolutionary programming process to decide which ones of those are most promising. And then that gets ported into the next step.
Casey Newton: And tell us a little bit about what evolutionary programming is. It sounds very exciting.
Demis Hassabis: Yeah, so it's basically a way for systems to explore new space. In genetics, the question is: What should we mutate to give you a new organism? You can think about programming or mathematics the same way: You change the program in some way, and then you compare it to some answer you're trying to get; and then the ones that fit best, according to an evaluation function, you put back into the next set, generating new ideas. And we have our most efficient model generating possibilities, and then we have the Pro model critiquing them and deciding which one is most promising to be selected for the next round of evolution.
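[For readers who want a concrete picture of the propose-evaluate-select loop Hassabis describes, here is a toy sketch in Python. The propose and evaluate functions are stand-ins for the generator model and the automatic evaluation function; this is not the AlphaEvolve implementation.]

```python
# Toy evolutionary-programming loop: a fast model proposes mutated candidate
# programs, an automatic evaluator scores them (only possible in "provably
# correct" domains like code and math), and the best candidates seed the next
# generation. All functions are placeholders for illustration.
import random

def propose(parent: str) -> str:
    """Stand-in for the fast generator model mutating a parent program."""
    return parent + f"  # mutation {random.randint(0, 999)}"

def evaluate(candidate: str) -> float:
    """Stand-in for the evaluation function (e.g. run tests, measure speed)."""
    return random.random()

def evolve(seed_program: str, generations: int = 10,
           population: int = 20, keep: int = 4) -> str:
    pool = [seed_program]
    for _ in range(generations):
        # Each surviving parent spawns several mutated children.
        children = [propose(parent)
                    for parent in pool
                    for _ in range(population // len(pool))]
        # Score everything and keep only the most promising candidates,
        # which seed the next round of mutation.
        scored = sorted(children + pool, key=evaluate, reverse=True)
        pool = scored[:keep]
    return pool[0]
```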
Kevin Roose: So it's sort of like an autonomous A.I. research organization almost, where you have some A.I.s coming up with hypotheses, other A.I.s testing them and supervising them, and the goal, as I understand it, is to have an A.I. that can kind of improve itself over time or suggest improvements to existing problems.
Demis Hassabis: Yes. It's the beginning of a kind of automated process. It's still not fully automated. And also, it's still relatively narrow. We've applied it to many things, like chip design, scheduling A.I. tasks on our data centers more efficiently, even improving matrix multiplication, [which is] one of the most fundamental units of training algorithms. So it is actually amazingly useful already. But it's still constrained to domains that are provably correct, right, which obviously math and coding are. So we need to fully generalize that.
Casey Newton: But it's interesting because I think for a lot of people, the knock they have on LLMs in general is, well, all you can really give me is the statistical median of your training data. But what you're saying is, We now have a way of going beyond that to potentially generate novel ideas that are actually useful in advancing the state of the art.
Demis Hassabis: That's right. This is another approach, AlphaEvolve, using evolutionary methods, but we really had evidence of that even way back in the AlphaGo days. AlphaGo came up with new Go strategies, most famously Move 37 in Game 2 of our big world championship match against Lee Sedol. And OK, it was limited to a game, but it was a genuinely new strategy that had never been seen before, even though we've played Go for hundreds of years. So that's when I kicked off our AlphaFold projects and science projects, because I was waiting to see evidence of that spark of creativity or originality, at least within the domain of what we know. But there's still a lot further to go. We know that these kinds of models — paired with things like Monte Carlo Tree Search, reinforcement learning, or planning techniques — can get you to new regions of space to explore. And evolutionary methods are another way of going beyond what the current model knows.
Casey Newton: I've been looking for a good Monte Carlo Tree for so long now, so if you could help me find one, it would honestly be a huge help.
Demis Hassabis: One of these things could probably help.
Casey Newton: OK, great.
Kevin Roose: So I read the AlphaEvolve paper. (Or to be more precise, I fed it into NotebookLM and had it make a podcast that I could then listen to, that would explain it to me at a slightly more elementary level.) And one fascinating thing that stuck out to me is a detail about how you were able to make AlphaEvolve more creative. And one of the ways that you did it was by essentially forcing the model to hallucinate. So many people right now are obsessed with eliminating hallucinations. But it seemed to me like one way to read that paper is that there is actually a scenario in which you want models to hallucinate or be creative — whatever you want to call it.
Demis Hassabis: Well, I think that's right. Hallucination when you want factual things is something you don't want, obviously. But in creative situations — like lateral thinking in an MBA course or something — you just create some crazy ideas, and most of them don't make sense. But the odd one or two may get you to a region of the search space that is actually quite valuable, it turns out, once you evaluate it afterward. And so you can substitute the word 'hallucination' maybe for 'imagination' at that point, right? They're obviously two sides of the same coin.
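[As a rough illustration of the "hallucination as imagination" idea, a hedged sketch: sample many candidates at a high temperature, where most will be nonsense, and keep only the ones an evaluation function scores well afterward. The generate and score callables here are hypothetical placeholders, not a real API.]

```python
# Sketch of creative generation via deliberately loose sampling followed by
# filtering: crank up the temperature, produce many candidates, evaluate later.
def creative_search(prompt, generate, score,
                    n_samples=64, temperature=1.3, threshold=0.8):
    # High temperature makes the model "hallucinate" more diverse ideas.
    candidates = [generate(prompt, temperature=temperature)
                  for _ in range(n_samples)]
    # Evaluation happens afterward; only the rare useful ideas survive.
    return [c for c in candidates if score(c) >= threshold]
```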
Kevin Roose: I did talk to one A.I. safety person who was a little bit worried about AlphaEvolve, not because of the actual technology and the experiments, which this person said, you know, they're fascinating, but because of how it was rolled out. So Google DeepMind created AlphaEvolve and then used it to optimize some systems inside Google and kept it sort of hidden for a number of months and only then released it to the public. And this person was saying, Well, if we really are getting to the point where these A.I. systems are starting to become recursively self-improving and they can build a better A.I., doesn't that imply that when Google does build A.G.I. or even super intelligence, that it's going to keep it to itself for a while rather than doing the responsible thing and informing the public?
Demis Hassabis: Well, I think it's a bit of both, actually. First of all, AlphaEvolve is a very nascent self-improvement thing, and it's still got humans in the loop, and it's only shaving percentage points — albeit important ones — off already existing tasks. That's valuable, but it's not creating any kind of step change. And there's a trade-off between carefully evaluating things internally before you release them to the public, out into the wild, and getting the extra critique back, which is also very useful, from the academic community and so on. And we also have a lot of trusted-tester-type programs, where people get early access to these things and then give us feedback and stress-test them, including sometimes the safety institutes as well.
Kevin Roose: But my understanding was you weren't just red teaming this internally within Google. You were actually using it to make the data centers more efficient, using it to make the kernels that train the A.I. models more efficient. So I guess what this person is saying is: We want to start getting good habits around these things now before they become something like A.G.I. And they were just a little worried that maybe this is going to be something that stays hidden for longer than it needs to. I would love to hear your response to that.
Demis Hassabis: I think that system is not anything, really, that I would say has any risk on the A.G.I. type of front, and I think today's systems — although very impressive — are not that powerful from any kind of A.G.I. risk standpoint that maybe this person was talking about. And I think you need to have both. You need to have incredibly rigorous internal tests of these things, and then we also need to get collaborative inputs from external groups. So I think it's a bit of both. I actually don't know the details of the AlphaEvolve process for the first few months; it was just function search before, and then it became more general. So it's evolved itself over the last year into this general-purpose tool. And it still has a long way to go before we can actually use it on our main branch, which at that point becomes more serious, like with Gemini. It's separate from that currently.
Casey Newton: Let's talk about A.I. safety a little more broadly. It's been my observation that the further back in time you go, and the less powerful the A.I. systems, the more everyone seemed to talk about the safety risks. And it seems like now, as the models improve, we hear about it less and less, including at the keynote Tuesday. So I'm curious what you make of this moment in A.I. safety, whether you feel like you're paying enough attention to the risks that could be created by the systems that you have, and whether you are as committed to it as you were, say, three or four years ago, when a lot of these outcomes seemed less likely.
Demis Hassabis: Yeah, we're just as committed as we've ever been. From the beginning of DeepMind, we planned for success. So success meant something looking like this; it's what we kind of imagined. I mean, it's sort of unbelievable still that it's actually happened. But it is in the Overton window of what we thought was going to happen, if these technologies really did develop the way we thought they were going to. And attending to and mitigating those risks was part of that. And so we do a huge amount of work on our systems. I think we have very robust red-teaming processes, both pre- and post-launch. And we've learned a lot, and I think that's the difference now: having these systems, albeit early systems, have contact with the real world. I'm sort of persuaded now that that has been a useful thing overall.
And I think five years ago, 10 years ago I may have thought maybe it's better staying in a research lab and, you know, kind of collaborating with academia and that. But actually there's a lot of things you don't get to see or understand unless millions of people try it. So it's this weird trade-off — you can only do it when there's millions of smart people trying your technology and then you find all these edge cases. So however big your testing team is, it's only going to be 100 people or 1,000 people or something. So it's not comparable to tens of millions of people using your systems.
But on the other hand, you want to know as much as possible ahead of time so you can mitigate the risks before they happen. So this is interesting and it's good learning. I think what's happened in the industry in the last two, three years has been great because we've been learning when the systems are not that powerful or risky, as you were saying earlier. I think things are going to get very serious in two, three years' time when these agent systems start becoming really capable. We're only seeing the beginnings of the agent era, let's call it.
But you can imagine, and hopefully you understood from the keynote, what the ingredients are and how it's going to come together, and then I think we really need a step change in research on analysis, on understanding, on controllability. But the other key thing is, it's got to be international. That's pretty difficult. And I've been very consistent on that, because it's a technology that's going to affect everyone in the world, and it's being built by different companies in different countries. So you've got to get some kind of international norm, I think, around what we want to use these systems for and what kinds of benchmarks we want to test safety and reliability on.
But there's plenty of work to get on with now. Like, we don't have those benchmarks. We and the industry and academia should be agreeing to a consensus of what those are.
Casey Newton: What role do you want to see export controls play in doing what you just said?
Demis Hassabis: Well, export control is a very complicated issue. And obviously, geopolitics today is extremely complicated. And I see both sides of the arguments on that. There's the risk of uncontrolled proliferation of these technologies. Do you want different places to have frontier model training capability? I'm not sure that's a good idea. But on the other hand, you want Western technology to be the thing that's adopted around the world. So it's a complicated trade-off. If there were an easy answer, I would be shouting it from the rooftops, but I think it's nuanced, like most real-world problems are.
Kevin Roose: Do you think we're heading into a bipolar conflict with China over A.I., if we aren't in one already? Just recently, we saw the Trump administration making a big push to make the Middle East — countries in the Gulf, like Saudi Arabia and the UAE — into A.I. powerhouses, have them use American chips to train models that will not be accessible to China and its A.I. powers. Do you see that becoming the foundations of a new global conflict?
Demis Hassabis: Well, I hope not. But I think short-term, A.I. is getting caught up in the bigger geopolitical shifts that are going on. So I think it's just part of that, and it happens to be one of the most topical new things that's appearing. But on the other hand, what I'm hoping is that as these technologies get more and more powerful, the world will realize we're all in this together, because we are. And so in the last few steps toward A.G.I. — and hopefully we're on the longer timelines actually, more the timelines I'm thinking about — we get time to build the collaboration we need, at least on a scientific level, before then.
Kevin Roose: Do you feel like you're in the final homestretch to A.G.I.? Sergey Brin, Google's co-founder, had a memo that was reported on by my colleague at The New York Times earlier this year that went out to Google employees and said, we're in sort of the homestretch, and everyone needs to get back to the office and be working all the time, because this is when it really matters. Do you have that sense of finality or entering a new phase or an end game?
Demis Hassabis: I think we are past the middle game, that's for sure. But I've been working every hour there is for the last 20 years, because I've felt how important and momentous this technology would be. We've thought it was possible for 20 years, and I think it's coming into view now. I agree with that. And whether it's five years or 10 years or two years, they're all actually quite short timelines when you're discussing the enormity of the transformation that this technology is going to bring. None of those timelines are very long.
Kevin Roose: We're going to switch to more general questions about the A.I. future. A lot of people now are starting to, at least in conversations that I'm involved in, think about what the world might look like after A.G.I. The context in which I actually hear the most about this is from parents who want to know what their kids should be doing and studying; will they go to college? You have kids that are older than my kid. How are you thinking about that?
Demis Hassabis: When it comes to kids, and I get asked this quite a lot about university students as well, first of all, I wouldn't dramatically change some of the basic advice on STEM, getting good at things like coding. Because I think whatever happens with these A.I. tools, you'll be better off understanding how they work, how they function and what you can do with them. I would also say immerse yourself now; that's what I would be doing as a teenager today, trying to become a sort of ninja at using the latest tools. I think you can almost be sort of superhuman in some ways if you get really good at using all the latest, coolest A.I. tools. But don't neglect the basics, either, because you need the fundamentals. And then I think teach meta skills — learning to learn. The only thing we know for sure is there's going to be a lot of change over the next 10 years.
So how does one get ready for that? What kind of skills are useful for that? Creativity, adaptability, resilience — I think all of these sorts of meta skills will be important for the next generation. And it'll be very interesting to see what they do, because they're going to grow up A.I.-native, just as the last generation grew up mobile- and tablet-native, and, before that, internet- and computer-native, which was my era. The kids of each era always seem to adapt to make use of the latest, coolest tools. And I think there's more we can do on the A.I. side. If people are going to use the tools for school and education, let's make them really good for that, and provably good. I'm very excited about bringing A.I. to education in a big way: an A.I. tutor, for example, that we could bring to poorer parts of the world that don't have good educational systems. So I think there's a lot of upside there too.
Casey Newton: Another thing that kids are doing with A.I. is chatting a lot with digital companions. Google DeepMind doesn't make any of these companions yet. Some of what I've seen so far seems pretty worrying. It seems pretty easy to create a chatbot that just does nothing but tell you how wonderful you are, and that can sort of lead into some dark and weird places. So I'm curious what observations you've had as you look at this market for A.I. companions and whether you think you might want to build this someday or you're going to leave that to other people.
Demis Hassabis: Yeah, I think we've got to be very careful as we start entering that domain and that's why we haven't yet and we've been very thoughtful about that. My view on this is more through the lens of the universal assistant that we talked about yesterday, which is something that's incredibly useful for your everyday productivity: it gets rid of the boring, mundane tasks that we all hate doing to give you more time to do the things that you love doing. I also really hope that they're going to enrich your lives by giving you incredible recommendations, for example, on all sorts of amazing things that you didn't realize you would enjoy — sort of delight you with surprising things. So I think these are the ways I'm hoping that these systems will go.
And actually, on the positive side, I feel like if this assistant becomes really useful and knows you well, you could sort of program it, obviously with natural language, to protect your attention. So you could almost think of it as a system that works for you; you know, as an individual, it's yours. And it protects your attention from being assaulted by other algorithms that want your attention, which is actually nothing to do with A.I. Most social media sites, that's what they're doing effectively, their algorithms are trying to gain your attention. And I think that's actually the worst thing and it'd be great to protect that so we can be more in creative flow or whatever it is that you actually want to do. So I think that's how I would want these systems to be useful to people.
Casey Newton: If you could build a system like that, I think people would be so incredibly happy. I think right now people feel assailed by the algorithms in their life and they don't know what to do about it.
Demis Hassabis: Well, the reason is that you've got one brain, and to get the piece of information you want out of, let's say, a social media stream, you have to dip into that torrent. But you're doing it with the same brain, so you've already affected your mind and your mood and other things by dipping into that torrent to find the valuable piece of information you wanted. But if a digital assistant did that for you, you would only get the useful nugget. And you wouldn't need to break your mood, or your concentration, or whatever it is you're doing that day with your family. I think that would be wonderful.
Kevin Roose: Casey loves that idea, you love that idea, I love this idea of an A.I. agent that protects your attention from all the forces trying to assault it. I'm not sure how the ads team at Google is going to feel about this, but we can ask them when the time comes.
Demis Hassabis: Sure, sure.
Kevin Roose: Some people are starting to look at the job market, especially for recent college graduates and worry that we're already starting to see signs of A.I.-powered job loss. Anecdotally, I talk to young people who a couple years ago, might have been interested in going into fields like tech or consulting or finance or law, who are just saying, 'I don't know that these jobs are going to be around much for longer.' A recent article in The Atlantic wondered if we're starting to see A.I. competing with college graduates for these entry-level positions. Do you have a view on that?
Demis Hassabis: I haven't looked at that; I haven't seen the studies on that. But you know, maybe it's starting to appear now. I don't think there's any hard numbers on that yet, at least I haven't seen it. I think for now I mostly see these as tools that are augmenting what you can do and what you achieve. I mean maybe after A.G.I. things will be different again, but over the next five to ten years I think we're going to find what normally happens with big, new technology shifts, which is that some jobs get disrupted, but then new, more valuable, and usually more interesting jobs get created. So I do think that's what's going to happen in the nearer term. So the next five years, let's say, I think it's very difficult to predict after that. That's part of this more societal change that we need to get ready for.
Kevin Roose: I think the tension there is that you're right, these tools do give people so much more leverage, but they also reduce the need for big teams of people doing certain things. I was talking to someone recently who said they had been at a data science company in their previous job that had 75 people working on some kind of data science tasks. And now they're at a startup that has one person doing the work that used to require 75 people. And so I guess the question I'd be curious to get your view on is: What are the other 74 people supposed to do?
Demis Hassabis: Well, look, I think these tools are going to unlock the ability to create things much more quickly. So I think there will be more people doing startup things. I mean, there's a lot more surface area one could attack and try with these tools than was possible before. So let's take programming, for example. Obviously, these systems are getting better at coding. But the best coders, I think, are getting differential value out of it, because they still understand how to pose the question, architect the whole code base and check what the code does. But simultaneously, at the hobbyist end, it's allowing designers and maybe nontechnical people to vibecode some things, whether that's prototyping games or websites or movie ideas. So in theory, those other 70-ish people could be creating new startup ideas; maybe it's going to be less of these bigger teams and more smaller teams that are very empowered by A.I. tools. But that goes back to the education question: Which skills are now important? Different skills, like creativity, vision, and design sensibility, could become increasingly important.
Casey Newton: Do you think you'll hire as many engineers next year as you hired this year?
Demis Hassabis: I think so, yeah; there's no plan to hire less. But again, we have to see how fast the coding agents improve. Today, they can't do things on their own. They're just helpful for the best human coders.
Casey Newton: Last time we talked to you, we asked you about some of the more pessimistic views about A.I. in the public. And one of the things you said to us was that the field needed to demonstrate concrete use cases that were just clearly beneficial to people to shift things. My observation is that I think there are even more people now who are actively antagonistic toward A.I., and I think maybe one reason is they hear folks at the big labs saying pretty loudly, 'Eventually this is going to replace your job.' And most people just think, 'Well, I don't want that.' So I'm curious, looking on from that past conversation, if you feel like we have seen enough use cases to start to shift public opinion, or if not, what some of those things might be that actually change views here.
Demis Hassabis: Well, I think we're working on those things. They take time to develop. I think a kind of universal assistant would be one of those things, if it was really yours and working for you effectively — so technology that works for you. I think this is what economists and other experts should be working on: Does everyone have a suite of agents that are doing things for them, including potentially earning them money or building them things? Does that become part of the normal job process? I could imagine that in the next four or five years. I also think that, as we get closer to A.G.I. and we make breakthroughs in materials science, energy, fusion, these sorts of things, helped by A.I., we should start getting to a position in society where we're getting toward what I would call radical abundance, where there's a lot of resources to go around. And then again, it's more of a political question of how you would distribute that in a fair way, right? So I've heard this term universal high income. Something like that, I think, is probably going to be good and necessary, but obviously there are a lot of complications that need to be thought through. And there's this transition period between now and whenever we have that sort of situation. What do we do about the change in the interim? And it depends on how long that period is, too.
Kevin Roose: What part of the economy do you think A.G.I. will transform last?
Demis Hassabis: I think the parts of the economy that involve human to human interaction and emotion; those things I think will probably be the hardest things for A.I. to do.
Kevin Roose: But aren't people already doing A.I. therapy and talking with chatbots for things that they might have paid someone, you know, a hundred dollars an hour for?
Demis Hassabis: Well, therapy is a very narrow domain, and there's a lot of, you know, hype about those things. I'm not actually sure how many of those things are really going on in terms of actually affecting the real economy rather than just being more toy things. And I don't think the A.I. systems are capable of doing that properly yet. But just the kind of emotional connection that we get from talking to each other and doing things in nature in the real world, I don't think that A.I. can really replicate all of those things.
Casey Newton: So if you lead hikes, it'd be a good job.
Demis Hassabis: Yeah, I'm going to climb Everest.
Kevin Roose: My intuition on this is that it's going to be some heavily regulated industry where there will just be massive pushback on the use of A.I. to displace labor or take people's jobs, like health care or education or something like that. But it sounds like you think it's going to be an easier lift in those heavily regulated industries.
Demis Hassabis: I don't know, I mean it might be. But then we have to weigh that up as a society — whether we want all the positives. It's not that there are no challenges in society other than A.I.; in fact, I think A.I. can be a solution to a lot of those other challenges, be that energy, resource constraints, aging, disease, water access, climate. There's a ton of problems facing us today, and I think A.I. can potentially help with all of those. And I agree with you, society will need to decide what it wants to use these technologies for. But then, what's also changing, as we discussed earlier with products, is that the technology is going to continue advancing, and that will open up new possibilities, like a kind of radical abundance, space travel, these things, which are a little bit out of scope today unless you read a lot of sci-fi, but I think rapidly becoming real.
Kevin Roose: During the Industrial Revolution, there were lots of people who embraced new technologies, moved from farms to cities to work in the new factories, were sort of early adopters on that curve. But that was also when the Transcendentalists started retreating into nature and rejecting technology. That's when Thoreau went to Walden Pond. There was a big movement of Americans who just saw the new technology and said, 'I don't think so, not for me.' Do you think there'll be a similar movement around rejection of A.I.? And if so, how big do you think it'll be?
Demis Hassabis: I mean, there could be a 'get back to nature' movement. And I think a lot of people will want to do that. And I think this potentially will give them the room and space to do it, right? If you're in a world of radical abundance, I fully expect that's what a lot of us will want to do. I'm thinking about space-faring and maximum human flourishing. I think those will be exactly some of the things that a lot of us will choose to do, and we'll have the time and the space and the resources to do it.
Casey Newton: Are there parts of your life where you say, I'm not going to use A.I. for that, even though it might be pretty good at it, for some reason like wanting to protect your creativity or your thought process or something else?
Demis Hassabis: I don't think A.I. is good enough yet to impinge on any of those sorts of areas. Mostly I'm using it for things like you did with NotebookLM, which I feel is fine, great — breaking the ice on a new topic, a scientific topic, and then deciding if I want to go deeper into it. That's one of my main use cases: summarization. I think those are all just helpful. But we'll see. I haven't got any examples of what you suggested yet, but maybe as A.I. gets more powerful there will be.
Kevin Roose: When we talked to Dario Amodei of Anthropic recently, he talked about this feeling of excitement mixed with a kind of melancholy about the progress A.I. was making in domains where he had spent a lot of time trying to be very good, like coding: You see a new coding system come out that's better than you, you think that's amazing, and then your second thought is, ooh, that stings a little bit. Have you had any experiences like that?
Demis Hassabis: Of course. Maybe one reason it doesn't sting me as much is that I had that experience when I was very young, with chess. Chess was going to be my first career, and I was playing pretty professionally as a kid for the England junior teams. Then Deep Blue came along, and clearly the computers were going to be much more powerful than the world champion forever after that. But I still enjoy playing chess. People still do. It's different, you know, but it's a bit like Usain Bolt: We celebrate him for running the 100 meters incredibly fast. We've got cars, but we don't care about that, right? We're interested in other humans doing it. And I think that'll be the same with robotic football and all of these other things. That maybe goes back to what we discussed earlier about how, in the end, we're interested in other human beings. That's why, even with a novel, maybe A.I. could one day write one that's technically good, but I don't think it would have the same soul or connection to the reader if you knew it was written by an A.I., at least as far as I can see for now.
Casey Newton: You mentioned robotic football — is that a real thing? We're not sports fans, so I just want to make sure I haven't missed something.
Demis Hassabis: I was meaning soccer. There's RoboCup-style robot soccer: little robots trying to kick balls and things. I'm not sure how serious it is, but there is a field of robotic football.
Casey Newton: You mentioned that a novel written by a robot might not feel like it has a soul. I have to say, as incredible as the technology in Veo or Imagen is, I sort of feel that way with it, where it's beautiful to look at, but I don't know what to do with it. You know what I mean?
Demis Hassabis: Exactly, and that's why we work with great artists like Darren Aronofsky, and Shankar [Mahadevan] on the music. I totally agree with you — these are tools, and they can come up with technically good things. And I mean, Veo 3 is unbelievable — I don't know if you've seen some of the things going viral at the moment with the voices; actually, I didn't realize how big a difference audio was going to make to the video — I think it just really brings it to life. But it's still not there; as Darren said yesterday when we were discussing it in an interview, he brings the storytelling. It hasn't got the deep storytelling that a master filmmaker or a master novelist at the top of their game will bring. And it might never, right? It's just always going to feel like something's missing: the soul, for want of a better word, of the piece. The real humanity, the magic in great pieces of art. When I see a Van Gogh or a Rothko, why does that touch you, the hairs going up on the back of my spine? Because I remember, and know about, what they went through and the struggle to produce that, right? Every brushstroke of Van Gogh's, his sort of torture. And I'm not sure what that would mean, even if the A.I. mimicked it. So I think that is the piece that, at least as far as I can see out to five, 10 years, the top human creators will always be bringing. And that's why we've done all of our tools — Veo, Lyria — in collaboration with top creative artists.
Kevin Roose: The new Pope, Pope Leo, is reportedly interested in A.G.I. I don't know if he's A.G.I.-pilled or not, but that's something that he's spoken about before. Do you think we will have a religious revival or a renaissance of interest in faith and spirituality in a world where A.G.I. is forcing us to think about what gives our lives meaning?
Demis Hassabis: I think that potentially could be the case, and I actually did speak to the last Pope about that. The Vatican has been interested in these matters, even prior to this Pope, whom I haven't spoken to yet: How do A.I. and religion, and technology in general and religion, interact? And what's interesting about the Catholic Church, and I'm a member of the Pontifical Academy of Sciences, is that they've always had, which is strange for a religious body, a scientific arm, which they like to say Galileo was the founder of.
Kevin Roose: Didn't go great for him!
Demis Hassabis: And it's actually really separate, and I always thought that was quite interesting. People like Stephen Hawking and, you know, avowed atheists were part of the academy, and that's partly why I agreed to join it, because it's a fully scientific body and it's very interesting. I was fascinated that they've been interested in this for 10-plus years, so they were on this early in terms of how interesting, from a philosophical point of view, this technology will be. And I actually think we need more of that type of thinking and work from philosophers and theologians. So I hope the new Pope is genuinely interested.
Kevin Roose: We'll close on a question that I recently heard Tyler Cowen ask Jack Clark from Anthropic that I thought was so good that I decided to just steal it whole cloth: In the ongoing A.I. revolution, what is the worst age to be?
Demis Hassabis: Gosh, I haven't thought about that. But I think any age where you can live to see it is a good age, because I think we are going to make some great strides with things like medicine, and so I think it's going to be an incredible journey. None of us know exactly how it's going to transpire; it's very difficult to say. But it's going to be very interesting to find out.
Casey Newton: Try to be young if you can.
Demis Hassabis: Yes, young is always better. In general, young is always better.
Additional Reading:
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


TechCrunch
27 minutes ago
- TechCrunch
Jony Ive's LoveFrom helped design Rivian's first electric bike
Lovefrom, the creative firm founded by former Apple chief designer Jony Ive, played a role in the development of Rivian's first electric bike, according to multiple sources who spoke to TechCrunch. For about 18 months, a handful of LoveFrom staff worked alongside Rivian's design team and engineers within a skunkworks program led by Specialized's former chief product and technology officer Chris Yu. LoveFrom's work on the micromobility project ended in fall 2024, according to the sources. LoveFrom and Rivian declined to comment. Rivian's skunkworks program, which eventually grew to a team of about 70 people hailing from Apple, Google, Specialized, Tesla, REI Co-Op, spun out earlier this year with a new name and $105 million in funding from Eclipse Ventures. The micromobility startup, called Also, has yet to show off its first vehicle designs. In interviews with TechCrunch, Rivian founder and CEO RJ Scaringe (who is on Also's board) and Yu were cagey about what the new company's first vehicle would look like. 'There's a seat, and there's two wheels, there's a screen, and there's a few computers and a battery,' Scaringe said in March. He has also said it will be 'bike-like,' a description confirmed by sources. But both Scaringe and Yu spoke of a much bigger vision for Also, one where it could theoretically tackle almost any imaginable micromobility form factor. The new company is supposed to reveal its first designs at an event later this year. An Also spokesperson declined to comment about its bike or any connection to LoveFrom. Techcrunch event Save $200+ on your TechCrunch All Stage pass Build smarter. Scale faster. Connect deeper. Join visionaries from Precursor Ventures, NEA, Index Ventures, Underscore VC, and beyond for a day packed with strategies, workshops, and meaningful connections. Save $200+ on your TechCrunch All Stage pass Build smarter. Scale faster. Connect deeper. Join visionaries from Precursor Ventures, NEA, Index Ventures, Underscore VC, and beyond for a day packed with strategies, workshops, and meaningful connections. Boston, MA | REGISTER NOW When the electric 'bike' is revealed, it's possible that Ive's fingerprints will be all over it. Ive is best-known for being the design force behind the iPhone and myriad other Apple products, and most recently, his work with Sam Altman and OpenAI. But his collaboration with Rivian is not his first foray into the transportation industry. The parent company of Ferrari announced in 2021 that Ive's firm would help develop the Italian supercar manufacturer's next-generation vehicles. Ive was also involved with Apple's secretive car project. He was reportedly one of the main proponents for centering Apple's long-running car project around autonomy, whereas other people inside the company pushed for a more traditional electric car. Apple abandoned that project early last year. Sources told TechCrunch that Ive's LoveFrom has acted as a consultant for Rivian in the past, including on the company's redesigned infotainment system and retail, among other areas, according to two former employees with knowledge of the relationship. But its involvement in what would become Also was a more structured and dedicated effort, another source familiar with the relationship said. The skunkworks program began taking shape in early 2022 with a directive to explore whether Rivian's EV technology could be condensed down into something smaller and more affordable than its electric vans, trucks, and SUVs. 
Initially, the small team worked with Rivian designers to develop a product that could scale to different types of vehicles. A key design challenge was how to make the bike-like product modular while still maintaining the elevated aesthetics Rivian has become known for. By the time LoveFrom got involved in the project in early 2023, a lot of work had been completed, according to sources who said they helped refine the prototypes. The relationship was described as a 'pretty tight' collaboration between the skunkworks team, LoveFrom's staff, and the industrial designers based out of Rivian's headquarters in Irvine. This group looked at everything including the user interface and UX for the bike. The industrial design team at LoveFrom, which has a lot of experience with thoughtful and clever packaging, was particularly involved, according to one source, who noted the team brought an interdisciplinary and international perspective to the project.


Forbes
30 minutes ago
- Forbes
Apple Loop: iPhone 17 Air Questions, F1 Reviews, WWDC Expectations, iPad Pro Details
Taking a look back at this week's news and headlines from Apple, including iPhone 17 display questions, WWDC schedule, iPad Pro details, new WWDC hardware, iOS 26 updates, and the first F1 film reviews. Apple Loop is here to remind you of a few of the many discussions around Apple in the last seven days. You can also read my weekly digest of Android news here on Forbes. A flurry of discussion over the upcoming iPhone 17 and iPhone 17 Air displays started this week. The consensus has been that Apple will finally introduce its ProMotion technology to the base iPhone models, which would allow a variable refresh rate from 120 Hz down to 1Hz. That now looks to be in some doubt. Why would a decision not to introduce ProMotion be an issue? "Well, a bump up to 120Hz would give that smooth scrolling effect, so this is still definitely a step forward. However, ProMotion has a dynamic refresh rate, meaning the iPhone's battery life can be preserved when there's static content on the display, for instance, and the refresh rate is dialed right down to 1Hz. It's this capability which also enables the always-on display that's such a crowd-pleaser on the iPhone 16 Pro and other Pro models, for instance." (Forbes). Next week, we will see the annual Worldwide Developer Conference. Held at Apple's campus, the keynote session will be streamed on multiple platforms. Tim Cook will lead his executive team in a high-level look at Apple's plans for the following year. Forbes' contributor David Phelan looks at the broadcast details and what to expect. "If you're planning to tune in to Apple's World Wide Developers Conference for its keynote next week, you need to know when it's happening and how you can see it. Apple just launched a page on its YouTube channel so you don't miss a thing. Expected are details of new software for the iPhone (iOS 26, not iOS 19, as you might have thought), iPad, Mac, Apple Watch, Apple TV and Apple Vision Pro." (Forbes). One of the expectations at WWDC is he next-generation of Apple Silicon. The M series has typically been used in the Mac platform, but also appears in the iPad Pro form factor. Last year, the iPad Pro debuted the M4 chipset nearly six months ahead of the Mac. The same looks set to happen with the M5, supported by an update to iPadOS to bring it more desktop-like features: "These changes should make the iPad a far more capable 'computer,' for users who want that. But like Apple's previous iPadOS upgrades, the company will undoubtedly still preserve the tablet's simple, one-app-at-a-time UI for users who prefer it. In other words, the iPad's versatility—serving both as a excellent tablet and capable laptop replacement—will be highlighted." (9to5Mac). It's been many years since new hardware debuted at WWDC, so it's unlikely that the M5 iPad Pro will be presented next week. In fact, there's almost nothing in the on deck circle for Tim Cook to pull a One More Things… except perhaps an AirTag? "WWDC is always focused on software, but there are hardware announcements at the conference in some years. Most recently, Apple unveiled the Vision Pro and updated three Mac models at WWDC 2023. In 2024, however, it was a software-only affair." (MacRumors). The modern WWDC is built around Apple's annual update cycle. And at some point, hardware is no longer supported. 
While many devices are dropping off the support list this year, it's worth noting that those in the relegation zone may not be getting the full upgrade: "...as happens often from year to year, Apple may technically support a device while still withholding new features from it. For example, many iPhones support iOS 18, but only a handful are compatible with Apple Intelligence. As a result, a whole host of iOS 18's most powerful features aren't actually available on the majority of devices that can run iOS 18." (9to5Mac).

The newly renumbered iOS 26 will be the key software update at WWDC. Much of the update is expected to be built around a new user interface that brings all of Apple's operating systems closer together in look and operation. How Apple addresses its lack of AI progress since WWDC 2024 will also be a notable talking point. As for apps, there are going to be some significant changes in the smaller, more specific apps: "And while much of the spotlight will probably shine on the visual overhaul, 9to5Mac has learned that Apple has also been quietly preparing a handful of enhancements to everyday apps like Messages, Music, Notes, and even CarPlay. Some of which could be announced as early as next week." (9to5Mac via Forbes).

It's not just a big week for software; it's also a big week for Apple TV as its most ambitious film release arrives. The first reviews of F1—the imaginatively titled movie built around a fictional Formula 1 team—are in. Variety's Zack Sharf gathers up the critics' thoughts: "#F1TheMovie is so freaking good. It has all the adrenaline, heart, pacing, story and character that completely fleshes out this movie into excellence. I can only imagine how much MORE I would love this movie if I was a fan of F1 racing! Maybe I am now?" (Via Variety).

Apple Loop brings you seven days' worth of highlights every weekend here on Forbes. Don't forget to follow me so you don't miss any coverage in the future. Last week's Apple Loop can be read here, or this week's edition of Loop's sister column, Android Circuit, is also available on Forbes.


Android Authority
30 minutes ago
Gemini now lets you schedule tasks ahead of time
TL;DR
- Gemini now lets you automate routine tasks with its new scheduled actions feature.
- You can use it to schedule prompts that run at a specific time, day, or date, or after an event.
- The feature is available in the Gemini app for users with a Google AI Pro or Ultra subscription and qualifying Google Workspace business and education plans.

Google has started rolling out Gemini's scheduled actions feature, which we spotted in a teardown earlier this year. As highlighted in the code strings, the feature allows users to automate routine tasks, similar to the scheduled tasks feature already available in ChatGPT.

The scheduled actions feature will be available in the Gemini app starting today. Google says it will let users schedule prompts to perform tasks at a later date or time, or after an event. Users can use the feature to automate tasks like getting a summary of their calendar and unread emails every morning, generating five ideas for their blog every Monday, or staying updated on their favorite sports team. In addition to being useful for routine tasks, Gemini's scheduled actions will also come in handy for one-off tasks like getting a summary of an award show the day after it happens. Google adds that Gemini will allow users to transform prompts they're already using into recurring actions, or to manage existing actions from the new scheduled actions page in settings.

Sadly, the scheduled actions feature isn't available to all Gemini users. It is limited to users with a Google AI Pro or Ultra subscription and qualifying Google Workspace business and education plans. Google may eventually make the feature available on Gemini's free tier, but there's no official confirmation yet.
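Gemini's scheduled actions run entirely inside the app, and Google has not published an API for them. For readers curious about the underlying pattern, here is a minimal sketch of how a recurring prompt could be approximated with the Gemini developer API and Python's standard library; the google-generativeai package, the model name, the prompt, and the 8 a.m. schedule are all assumptions made for illustration, not details of the app feature.

```python
# Minimal sketch of a "scheduled prompt" built on the Gemini developer API.
# Illustration of the pattern only -- not Google's scheduled actions feature.
# Assumptions: the google-generativeai package, an API key from Google AI
# Studio, the model name, and the daily 8 a.m. schedule.
import datetime
import time

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # assumed credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

PROMPT = "Generate five ideas for my blog this week."
RUN_AT = datetime.time(hour=8)  # once a day at 8 a.m. local time


def seconds_until(run_at: datetime.time) -> float:
    """Return the number of seconds from now until the next run_at."""
    now = datetime.datetime.now()
    target = datetime.datetime.combine(now.date(), run_at)
    if target <= now:
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()


while True:
    time.sleep(seconds_until(RUN_AT))        # wait for the next scheduled run
    response = model.generate_content(PROMPT)
    print(f"[{datetime.datetime.now():%Y-%m-%d %H:%M}] {response.text}")
```

In the Gemini app itself none of this plumbing is needed: you phrase the schedule in the prompt, and manage or cancel it later from the scheduled actions page in settings.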