The DeepMind CEO's quest for control

One of Google's most powerful leaders is someone who long tried to distance his company from the search giant.
AI mastermind and chess whiz Demis Hassabis, who cofounded DeepMind and sold it to Google in 2014, has been propelled into the center of Google's universe in the past two years amid the frenzied race for AI dominance.
After years of trying to create more independence for DeepMind — shielding the AI research lab from the search giant — Hassabis was thrust into the belly of the beast in 2023 when Google merged DeepMind with its internal AI efforts.
Now, company insiders close to Hassabis say he is destined for even greater heights that may give him more control over the powerful AI technology he builds — and possibly one day lead the company.
Whether he's interested in those ambitions is another matter.
For years, Hassabis hoped the AI race would be one led by academics and scientists in research labs, with international organizations to prevent AI's misuse and to map humanity's path through a new era of technological transformation.
Hassabis now faces the pressure of pushing the frontiers of AI while also building it into Google's commercial products. He's wrangling a team of more than 6,000 people inside Google's AI engine room, one that has accrued ever more power and talent — Google DeepMind just poached the core team of the AI coding startup Windsurf, which rival OpenAI had previously planned to acquire.
The stakes have never been higher: He has to keep Google ahead of corporate rivals and keep the US ahead of China.
"His rise reminds me of Sundar's," said a longtime Google employee who has worked closely with Hassabis and Google CEO Sundar Pichai. "All of a sudden, you started hearing this name Sundar internally, and he kept having more and more responsibility, and all of a sudden, he was the CEO."
"Demis' rise has been similar," the employee added. "Now, all of a sudden, he's responsible for what is probably the most important team at Google right now."
Business Insider spoke to more than a dozen company insiders for this story. They described how Google tapped its ultimate AI weapon in Hassabis, who has the makings of a CEO — and whose ascent within Google has come with trade-offs. Google declined to make Hassabis available for an interview.
Google's AI battle plan
A British-born cognitive neuroscience Ph.D., Hassabis cofounded DeepMind in 2010 with the goal of one day creating a general-purpose artificial intelligence. In 2014, Hassabis sold the company to Google for $650 million with a caveat: It would create an AI ethics board that would limit how Google could use DeepMind's technology.
In the years to come, Hassabis would take this further, fighting to create thicker legal walls between his lab and the search giant, BI previously reported. For one thing, DeepMind's leaders worried that their AI would be used for military purposes.
After ChatGPT became a hit in late 2022, Google leaders ordered several pivots inside the company to address the potential threat to the search giant's business. Earlier that year, Hassabis had told DeepMind staff that he planned to reorient his lab toward building an AI assistant. One former employee said they were confused at the time as to how DeepMind could ever build something that would directly compete with Google's own Assistant.
It soon didn't matter. In 2023, after talks to secure DeepMind's independence failed, Pichai announced the unit would fuse with Google's internal AI lab, and the two merged their AI assistant efforts. Google had been caught flat-footed by OpenAI's ChatGPT and needed a battle plan. Google DeepMind, a fusion of the company's two premier AI labs, was the answer, with Hassabis in the driver's seat.
Hassabis has done more managing than ever since. Combining DeepMind with the Brain team from Google Research meant overseeing two units that had historically behaved more like competitors than colleagues.
Years before the merger, the rivalry between the two organizations, each trying to scoop the other, had gotten so bad that Hassabis had DeepMind's code locked so it wasn't visible to Google employees. Now, bringing them together meant deciding who should sit under whom, leading to some political infighting, according to multiple people who were there.
DeepMind also used different programming technology than Google — a decision Hassabis made to avoid dependencies on Google teams — so the merger posed a significant technical challenge as well. A Google spokesperson said these issues had since been resolved.
Thrusting Hassabis into the driver's seat gave him more power but less control. For example, Hassabis has had to change his stance on what were once red lines: In February 2025, Google dropped a pledge made in 2018 not to use its AI for military purposes, following the rest of Silicon Valley. Hassabis cosigned a blog post defending the decision.
"Since we first published our AI Principles in 2018, the AI field has evolved substantially, and we've updated our Principles to more effectively meet the environment of today. Our core commitment remains: benefits must substantially outweigh harms," a Google spokesperson told BI in a statement.
In the merger process, dozens of DeepMind employees left for other labs, pushed out by internal politics and pulled away by a red-hot AI job market. A pullback on publishing certain types of research also rankled some staff, insiders said.
"Some people joined an academic research lab and suddenly they're being asked to build products, and that's not what they wanted," a former DeepMind employee said.
Others were excited by how Google had been shaken out of its slumber, especially when cofounder Sergey Brin returned to the trenches.
The merger has required Hassabis to play a hand in the sort of Google politicking that DeepMind had long managed to mostly avoid.
AI Studio, Google's AI developer platform, was previously part of Google's Cloud organization and became a point of contention when teams working on Google's other AI cloud platform, Vertex, saw AI Studio as a competitor internally, according to two people familiar with the matter. The AI Studio team moved to Google DeepMind earlier this year, a resolution that Hassabis helped broker, one of the people said.
A spokesperson said the two products serve different purposes and that DeepMind and Google had historically collaborated.
"There have been many shared views and ways of working between the two teams, and the merger has been very smooth," they said.
Hassabis' new position has some employees wondering if he is being groomed to succeed Pichai and if the CEO chair could give him the level of control over Google's AI he has always wanted.
After all, if Google transforms into the AI-first company Pichai promised in 2016 — the ultimate form of Google, as cofounder Larry Page once saw it — Hassabis is the smartest bet, insiders say.
"My opinion is this is not going to happen," said another person who knows Hassabis. "But I'm less certain about that than I was a year ago."
Others close to Hassabis believe he wouldn't care for such responsibilities. Running Google means overseeing a search business and an advertising empire. It means wrangling a vocal and sometimes unhappy workforce. It means being dragged in front of Congress to answer uncomfortable questions. Above all, it means keeping shareholders happy.
"He does it because he knows the company needs it," said one former executive regarding Hassabis' elevated position running Google DeepMind. "But being CEO would push him away from the things he wants to do. This guy wants to cure cancer."
Who is Demis Hassabis?
For Hassabis, the past two years have punched up an already powerful résumé.
He was a chess prodigy who began playing at age 4 and reached master standard by 13. At 17, he joined the computer game company Bullfrog, where he helped build the multimillion-selling video game Theme Park (you might have played it). In 2010, he cofounded DeepMind, one of the world's most powerful and successful AI labs.
In 2024 alone, he oversaw a series of Google AI product launches, earned a share of the Nobel Prize in chemistry for his work on DeepMind's protein-folding project, and even slipped in a knighthood for "services to artificial intelligence," bestowed upon him by King Charles III.
People who have worked with Hassabis describe him as a double threat: He's incredibly smart and can dive deep into technical topics. He also has the charm and charisma to stand in front of thousands of people and make them believe AGI will happen — and that he'll be the one to lead them there.
Devang Agrawal, a former DeepMinder, recalled how he was studying for his undergraduate degree when Hassabis visited his university and gave students a demo of an AI system that used reinforcement learning to teach itself to play Atari games. Agrawal was sold on the mission, and a few years later, he was working for Hassabis.
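That demo rested on reinforcement learning: an agent improves by trial and error, guided only by a score, with no hand-coded strategy. As a rough illustration of the principle, here is a minimal tabular Q-learning sketch in Python on a hypothetical 4x4 grid world. It is a toy, not DeepMind's system, which combined the same update rule with deep neural networks that read raw screen pixels.

import random

SIZE = 4                                      # 4x4 grid; start at (0, 0), goal at (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1        # learning rate, discount, exploration rate

# One Q-value per state-action pair, all starting at zero.
Q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}

def step(state, action):
    """Apply an action; the only reward is +1 for reaching the goal."""
    dr, dc = ACTIONS[action]
    nxt = (min(max(state[0] + dr, 0), SIZE - 1),
           min(max(state[1] + dc, 0), SIZE - 1))
    done = nxt == (SIZE - 1, SIZE - 1)
    return nxt, (1.0 if done else 0.0), done

def greedy(state):
    """Pick a best-valued action, breaking ties at random."""
    best = max(Q[state])
    return random.choice([i for i, v in enumerate(Q[state]) if v == best])

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy: mostly exploit what's known so far, sometimes explore.
        a = random.randrange(len(ACTIONS)) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, a)
        # Core update: nudge Q toward the reward plus discounted future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, following the highest Q-values walks straight to the goal.
state, path = (0, 0), [(0, 0)]
while state != (SIZE - 1, SIZE - 1) and len(path) < 20:  # cap as a safeguard
    state, _, _ = step(state, greedy(state))
    path.append(state)
print("learned path:", path)

DeepMind's Atari work essentially swapped the lookup table for a convolutional network that estimates Q-values directly from pixels, applying the same learning rule at a far larger scale.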
"I think Demis has this really brilliant quality about him, to talk in an inspirational way," said Agrawal.
"Demis is very good at convincing you that you and him together are going to change the world," said another former DeepMind employee.
Hassabis can also be fiercely competitive, colleagues and friends say.
When asked about Elon Musk's AI ambitions during an interview at Davos in January, Hassabis remarked, "I think I got him into AI in the first place." In an interview with The New York Times last year, he said of fellow DeepMind founder and now Microsoft competitor Mustafa Suleyman: "Most of what he has learned about AI comes from working with me over all these years."
DeepMind and Google's awkward dance
While he's risen as a corporate leader, people who know him say Hassabis sees himself first as a scientist. His work on AlphaFold, an AI system that predicts the 3D structures of proteins, resulted in a new spin-off company and garnered him his Nobel Prize.
"I want to understand the fundamental nature of reality," he said during a fireside chat at Davos in January.
For him, AI is a tool to do that, and he's most comfortable when discussing the Nobel-winning breakthrough of protein folding and the potential for AI to eradicate diseases.
"If I could wave a magic wand and create the ideal setup, everyone would be working on more narrow AI tools like AlphaFold," he said at a recent fireside chat at Queens' College at Cambridge University. "But the technology hasn't turned out that way."
After Google bought DeepMind, the AI lab embarked on an initiative — codenamed "Watermelon" and later called "Mario" — to create thicker walls between its group and its funder. It wanted to protect its valuable creations and prevent them from being used for military or surveillance purposes.
To some employees who spoke to BI, the initiative made no sense. Why would Google spend hundreds of millions of dollars to acquire a company it could not control? Hassabis knew he was racing against the clock, multiple former and current DeepMind employees said. One recalled a presentation Hassabis once gave that finished with a warning that AI would eventually become a race.
"To be one of the people at the table to stop other people from doing bad, you also need to be at the forefront," that person said.
In April 2021, Hassabis gathered his employees to announce that the bid to form a separate legal structure had indeed failed. Hassabis spun it as a win: It meant DeepMind would continue to have access to Google's vast computing power, necessary for building large AI systems and, ultimately, Hassabis hoped, artificial general intelligence — a hypothetical level of machine intelligence that surpasses human cognition across a range of areas.
What DeepMind most lacked was leverage. When Hassabis sold the company to Google in 2014, the AI landscape looked starkly different. A person familiar with the matter said that before selling, Hassabis considered forming a second company to build video games and fund DeepMind's research.
Google's $650 million price tag looks like a bargain compared with OpenAI's $6.6 billion funding round or Meta's $14.3 billion investment in Scale AI.
Hassabis was simply too early.
"In 2014, DeepMind didn't have enough money," said another former DeepMind employee. "They needed to raise it to work on something that seemed crazy to most people."
Google cofounder Larry Page convinced Hassabis to sell, promising to give DeepMind a large degree of independence. The arrangement allowed DeepMind to continue its research without being part of Google's corporate bureaucracy, while also giving it access to what would become more valuable than anything else: computing power.
"Whereas today there's enough interest where you can raise enough money without giving away the whole company, so they get to retain control. I can just imagine that's how Demis would prefer things — to be in control," one former employee said.
Demis Hassabis is on the quest for AGI
Hassabis now walks a difficult line between his own interests regarding AGI and those of Google's shareholders. Running Google's control room now means pushing the frontiers of AI research while infusing it into the company's suite of products, from Search to Maps to Workspace. Over the past year, the team has absorbed more parts of Google, including the team for its Gemini chatbot, consolidating more power under Hassabis.
At the same time, Google DeepMind has scaled back the types of papers it publishes to prevent rivals from taking Google's ideas. Research breakthroughs relating to Gemini or the pursuit of AGI now stay within Google's walls, according to multiple people familiar with the strategy — another compromise Hassabis has made as the capitalist engine spins into overdrive.
It's a change of tune from 2016, when Hassabis told Bloomberg Businessweek that work on AI should be carried out in an "open way" and that DeepMind did this by "publishing everything we write."
"I think ultimately the control of this technology should belong to the world, and we need to think about how that's done," he said.
"Google DeepMind has always been committed to advancing AI research, and we regularly update our policies to preserve the ability for our teams to publish and contribute to the broader research ecosystem," the Google spokesperson told BI.
Almost a decade later, Hassabis appears to still be wrestling with his concerns, though it's unclear if he has the power to do much about them while running Google's corporate AI engine.
At SXSW London in June, Hassabis again called for international cooperation to better safeguard the transformative technology he sees coming.
In recent years, he has become more vocal about his concerns that the world is not ready for AGI. At Google I/O in May, he and Brin predicted AGI would arrive around 2030.
People aren't thinking enough about AI's long-term implications, Hassabis has said.
"What's going to happen in the next five, 10-year timescale is going to be monumental," he said at Davos earlier this year, "and I think that's not understood yet."
"But I do think we should be thinking very carefully about these next steps."