
Google DeepMind's CEO Thinks AI Will Make Humans Less Selfish
Jun 4, 2025 6:00 AM
Demis Hassabis says that systems as smart as humans are almost here, and we'll need to radically change how we think and behave.
If you buy that artificial intelligence is a once-in-a-species disruption, then what Demis Hassabis thinks should be of vital interest to you. Hassabis leads the AI charge for Google, arguably the best-equipped of the companies spending many billions of dollars to bring about that upheaval. He's among those powerful leaders gunning to build artificial general intelligence, the technology that will supposedly have machines do everything humans do, but better.
None of his competitors, however, have earned a Nobel Prize and a knighthood for their achievements. Sir Demis is the exception—and he did it all through games. Growing up in London, he was a teenage chess prodigy; at age 13 he ranked second in his age group worldwide. He was also fascinated by complex computer games, first as an elite player and then as a designer and programmer of legendary titles like Theme Park. But his true passion was making computers as smart as wizards like himself. He even left the gaming world to study the brain, earning his PhD in cognitive neuroscience in 2009. A year later he began the ultimate hero's quest—a journey to invent artificial general intelligence through the company he cofounded, DeepMind. Google bought the company in 2014 and more recently merged it with Google Brain, a more product-oriented AI group; Hassabis heads the combined operation. Among other things, he has used a gamelike approach to solve the scientific problem of predicting the structure of a protein from its amino acid sequence—AlphaFold, the innovation that last year earned him the chemistry Nobel.
Now Hassabis is doubling down on perhaps the biggest game of all—developing AGI in the thick of a brutal competition with other companies and all of China. If that isn't enough, he's also CEO of an Alphabet company called Isomorphic, which aims to exploit the possibilities of AlphaFold and other AI breakthroughs for drug discovery.
When I spoke to Hassabis at Google's New York City headquarters, his answers came as quickly as a chatbot's, crisply parrying every inquiry I could muster with high spirits and a confidence that he and Google are on the right path. Does AI need a big breakthrough before we get to AGI? Yes, but it's in the works! Does leveling up AI court catastrophic perils? Don't worry, AGI itself will save the day! Will it annihilate the job market as it exists today? Probably, but there will always be work for at least a few of us. That—if you can believe it—is optimism. You may not always agree with what Hassabis has to say, but his thoughts and his next moves matter. History, after all, will be written by the winners.
This interview was edited for clarity and concision.
When you founded DeepMind you said it had a 20-year mission to solve intelligence and then use that intelligence to solve everything else. You're 15 years into it—are you on track?
We're pretty much dead on track. In the next five to 10 years, there's maybe a 50 percent chance that we'll have what we define as AGI.
What is that definition, and how do we know we're that close?
There's a debate about definitions of AGI, but we've always thought about it as a system that has the ability to exhibit all the cognitive capabilities we have as humans.
Eric Schmidt, who used to run Google, has said that if China gets AGI first, then we're cooked, because the first one to achieve it will use the technology to grow bigger and bigger leads. You don't buy that?
It's an unknown. That's sometimes called the hard-takeoff scenario, where an AGI gets extremely fast at coding future versions of itself. So a slight lead could in a few days suddenly become a chasm. My guess is that it's going to be more of an incremental shift. It'll take a while for the effects of digital intelligence to really impact a lot of real-world things—maybe another decade-plus.
Since the hard-takeoff scenario is possible, does Google believe it's existential to get AGI first?
It's a very intense time in the field, with so many resources going into it, lots of pressures, lots of things that need to be researched. We obviously want all of the brilliant things that these AI systems can do. New cures for diseases, new energy sources, incredible things for humanity. But if the first AI systems are built with the wrong value systems, or they're built unsafely, that could be very bad.
There are at least two risks that I worry a lot about. One is bad actors, whether individuals or rogue nations, repurposing AGI for harmful ends. The second one is the technical risk of AI itself. As AI gets more powerful and agentic, can we make sure the guardrails around it are safe and can't be circumvented?
Only two years ago AI companies, including Google, were saying, 'Please regulate us.' Now, in the US at least, the administration seems less interested in putting regulations on AI than accelerating it so we can beat the Chinese. Are you still asking for regulation?
The idea of smart regulation makes sense. It needs to be nimble, adapting as our knowledge of the research gets better and better. It also needs to be international. That's the bigger problem.
If you reach a point where progress has outstripped the ability to make the systems safe, would you take a pause?
I don't think today's systems are posing any sort of existential risk, so it's still theoretical. The geopolitical questions could actually end up being trickier. But given enough time and enough care and thoughtfulness, and using the scientific method …
If the time frame is as tight as you say, we don't have much time for care and thoughtfulness.
We don't have much time. We're increasingly putting resources into security, things like cyber, and also research into controllability and understanding of these systems, sometimes called mechanistic interpretability. And then at the same time, we need to have societal debates about institution building. How do we want governance to work? How are we going to get international agreement, at least on some basic principles around how these systems are used, deployed, and built?
How much do you think AI is going to change or eliminate people's jobs?
What generally tends to happen is new jobs are created that utilize new tools or technologies and are actually better. We'll see if it's different this time, but for the next few years, we'll have these incredible tools that supercharge our productivity and actually almost make us a little bit superhuman.
If AGI can do everything humans can do, then it would seem that it could do the new jobs too.
There's a lot of things that we won't want to do with a machine. A doctor could be helped by an AI tool, or you could even have an AI kind of doctor. But you wouldn't want a robot nurse—there's something about the human empathy aspect of that care that's particularly humanistic.
Tell me what you envision when you look at our future 20 years from now, when, according to your prediction, AGI is everywhere.
If everything goes well, then we should be in an era of radical abundance, a kind of golden era. AGI can solve what I call root-node problems in the world—curing terrible diseases, much healthier and longer lifespans, finding new energy sources. If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy. I think that will begin to happen in 2030.
I'm skeptical. We have unbelievable abundance in the Western world, but we don't distribute it fairly. As for solving big problems, we don't need answers so much as resolve. We don't need an AGI to tell us how to fix climate change—we know how. But we don't do it.
I agree with that. As a species, as a society, we haven't been good at collaborating. Our natural habitats are being destroyed, and it's partly because it would require people to make sacrifices, and people don't want to. But this radical abundance of AI will make things feel like a non-zero-sum game—
AGI would change human behavior?
Yeah. Let me give you a very simple example. Water access is going to be a huge issue, but we have a solution—desalination. It costs a lot of energy, but if there was renewable, free, clean energy [because AI came up with it] from fusion, then suddenly you solve the water access problem. Suddenly it's not a zero-sum game anymore.
If AGI solves those problems, will we become less selfish?
That's what I hope. AGI will give us radical abundance and then—this is where I think we need some great philosophers or social scientists involved—we shift our mindset as a society to non-zero sum.
Do you think having profit-making companies drive this innovation is the right way to go?
Capitalism and the Western democratic systems have so far been proven to be the best drivers of progress. Once you get to the post-AGI stage of radical abundance, new economic theories are required. I'm not sure why economists are not working harder on this.
Whenever I write about AI, I hear from people who are intensely angry about it. It's almost like hearing from artisans displaced by the Industrial Revolution. They feel that AI is being foisted on the public without their approval. Have you experienced that pushback and anger?
I haven't personally seen a lot of that. But I've read and heard a lot about that. It's very understandable. This will be at least as big as the Industrial Revolution, probably a lot bigger. It's scary that things will change.
On the other hand, when I talk to people about why I'm building AI—to advance science and medicine and understanding of the world around us—I can demonstrate it's not just talk. Here's AlphaFold, a Nobel Prize–winning breakthrough that can help with medicine and drug discovery. When they hear that, people say of course we need that, it would be immoral not to have that if it's within our grasp. I would be very worried about our future if I didn't know something as revolutionary as AI was coming, to help with those other challenges. Of course, it's also a challenge itself. But it can actually help with the others if we get it right.
You come from a gaming background—how does that affect what you're doing now?
Some of the training I had as a kid, playing chess on an international stage under real pressure, was very useful preparation for the competitive world that we're in.
Game systems seem easier for AI to master because they are bound by rules. We've seen flashes of genius in those arenas—I'm thinking of the surprising moves that AI systems pulled off in various games, like the Hand of God in the Deep Blue chess match, and Move 37 in the AlphaGo match. But the real world is way more complex. Could we expect AI systems to make similar non-intuitive, masterful moves in real life?
That's the dream.
Would they be able to capture the rules of existence?
That's exactly what I'm hoping for from AGI—a new theory of physics. We have no systems that can invent a game like Go today. We can use AI to solve a math problem, maybe even a Millennium Prize problem. But can you have a system come up with something as compelling as the Riemann hypothesis? No. That requires true inventive capability, which I think the systems don't have yet.
It would be mind-blowing if AI was able to crack the code that underpins the universe.
But that's why I started on this. It was my goal from the beginning, when I was a kid.
To solve existence?
Reality. The nature of reality. It's on my Twitter bio: 'Trying to understand the fundamental nature of reality.' It's not there for no reason. That's probably the deepest question of all. We don't know what the nature of time is, or consciousness and reality. I don't understand why people don't think about them more. I mean, this is staring us in the face.
Did you ever take LSD? That's how some people get a glimpse of the nature of reality.
No. I don't want to. I didn't do it like that. I just did it through my gaming and reading a hell of a lot when I was a kid, both science fiction and science. I'm too worried about the effects on the brain, I've done too much neuroscience. I've sort of finely tuned my mind to work in this way. I need it for where I'm going.
This is profound stuff, but you're also charged with leading Google's efforts to compete right now in AI. It seems we're in a game of leapfrog where every few weeks you or a competitor comes out with a new model that claims supremacy according to some obscure benchmark. Is there a giant leap coming to break out of this mode?
We have the deepest research bench. So we're always looking at what we sometimes call internally the next transformer.
Do you have an internal candidate for something that could be a comparable breakthrough to transformers—that could amount to another big jump in performance?
Yeah, we have three or four promising ideas that could mature into as big a leap as that.
If that happens, how would you not repeat the mistakes of the past? It wasn't enough for Google engineers to discover the transformer architecture, as they did in 2017. Because Google didn't press its advantage, OpenAI wound up exploiting it first and kicking off the generative AI boom.
We probably need to learn some lessons from that time, where maybe we were too focused on just pure research. In hindsight we should have not just invented it, but also pushed to productionize it and scale it more quickly. That's certainly what we would plan to do this time around.
Google is one of several companies hoping to offer customers AI agents to perform tasks. Is the critical problem making sure that they don't screw things up when they make some autonomous choice?
The reason all the leading labs are working on agents is because they'll be way more useful as assistants. Today's models are basically passive Q and A systems. But you don't want it to just recommend your restaurant—you'd love it to book that restaurant as well. But yes, it comes with new challenges of keeping the guardrails around those agents, and we're working very hard on the security aspects, to test them prior to putting them on the web.
Will these agents be persistent companions and task-doers?
I have this notion of a universal assistant, right? Eventually, you should have this system that's so useful you're using it all the time, every day. A constant companion or assistant. It knows you well, it knows your preferences, and it enriches your life and makes it more productive.
Help me understand something that was just announced at the I/O developer conference. Google introduced what it calls 'AI Mode' to its search page—when you do a search, you'll be able to get answers from a powerful chatbot. Google already has AI Overviews at the top of search results, so people don't have to click on links as much. It makes me wonder if your company is stepping into a new paradigm where Google fulfills its mission of organizing and accessing the world's information not through traditional search, but in a chat with generative AI. If Gemini can satisfy your questions, why search at all?
There are two clear use cases. When you want to get information really quickly and efficiently and just get some facts right, and then maybe check some sources, you use AI-powered search, as you're seeing with AI Overviews. If you want to do slightly deeper searches, then AI Mode is going to be great for that.
But we've been talking about how our interface with technology will be a continuous dialog with an AI assistant.
Steven, I don't know if you have an assistant. I have a really cool one who has worked with me for 10 years. I don't go to her for all my informational needs. I just use search for that, right?
Your assistant hasn't absorbed all of human knowledge. Gemini aspires to that, so why use search?
All I can tell you is that today, and for the next two or three years, both those modes are going to be growing and necessary. We plan to dominate both.