
Google faces loss of Chrome as Perplexity bid adds drama to looming breakup decision
Even if analysts aren't taking the offer very seriously, Perplexity's move marks a turning point. It's the first time an outside party has made such a public and specific effort to strip out a key piece of Google. The company is currently awaiting a judge's decision on whether it must take significant divestiture steps, following a ruling last year that it has held a monopoly in its core search market.
The ruling was widely viewed as the most important antitrust decision in the tech industry since the case against Microsoft more than two decades ago. The U.S. Department of Justice, which filed the landmark case against Google in 2020, indicated after its victory in court that it was considering a possible breakup of Google as an antitrust remedy.
Soon after that, the DOJ explicitly called for Google to divest Chrome to create a more equal playing field for search competitors. As is, Google bundles search and other services into Chrome and preinstalls the browser on Chromebooks. Google Legal Chief Kent Walker said in response to the DOJ that its "approach would result in unprecedented government overreach" and would harm the country's effort to maintain economic and tech leadership.
With the remedies decision expected this month, investors have a lot to consider regarding the future value of Google and parent Alphabet. The company is shelling out tens of billions of dollars a year on artificial intelligence infrastructure and AI services while facing the risk that consumers will be spending a lot less time on traditional search as ChatGPT and other AI-powered alternatives provide new ways to access information.
But while Alphabet still counts on search-related ads for the majority of its revenue, the company has been diversifying over the past decade. October will mark 10 years since the creation of Alphabet as a holding company, with Google as its prime subsidiary.
"This new structure will allow us to keep tremendous focus on the extraordinary opportunities we have inside of Google," co-founder Larry Page said in a blog post at the time.
Page moved from CEO of Google to become chief executive of Alphabet, promoting Sundar Pichai, who had been a senior vice president in charge of internet businesses, to run Google. Four years later, Pichai replaced Page as Alphabet CEO.
On Pichai's watch, Alphabet's market cap has jumped more than 150% to $2.5 trillion. With an increasingly dominant position on the internet, Pichai and team have had to continue looking for growth areas, particularly in AI, while simultaneously fending off an aggressive set of regulators in the U.S. and Europe.
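A quick back-of-the-envelope check of the figure above: a market cap that "jumped more than 150%" to $2.5 trillion implies a starting value of just under $1 trillion when Pichai took over as Alphabet CEO. A minimal sketch using only the numbers cited in the article:

```python
# A cap that grew "more than 150%" to $2.5T implies a starting value
# of just under $1T (using exactly 150% as the lower bound).
current_cap = 2.5e12   # $2.5 trillion today
growth = 1.50          # 150% increase
implied_start = current_cap / (1 + growth)
print(f"Implied starting market cap: ${implied_start / 1e12:.2f}T")
```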
Analysts have taken the opportunity to place estimated values on Alphabet's various businesses, partly in the event that the company is ever forced into drastic measures. Some have even suggested it could be a good thing for shareholders.
"We believe the only way forward for Alphabet is a complete breakup that would allow investors to own the business they actually want," analysts at D.A. Davidson have written in a series of notes this year.
Alphabet didn't respond to a request for comment.
Here's a breakdown of how some analysts value Alphabet's top non-search assets:
Chrome
The browser is key to Alphabet's ad business, which uses data from Chrome to help target advertisements. Google originally launched Chrome in 2008 as an effort to "add value for users and, at the same time, help drive innovation on the web."
Perplexity's offer falls short of analyst estimates, but it's still much higher than Perplexity's own valuation, which reached $18 billion in July. Perplexity, best known for its AI-powered search engine that gives users direct answers to queries, said investors are on board to foot the bill, though it didn't name the prospective backers.
Barclays analysts called the possibility of a Chrome divestiture a "black swan" risk, warning of a potential 15% to 25% drop in Alphabet's stock should it occur. They estimate that Chrome drives around 35% of Google's search revenue.
If a deal for Chrome is on the table, analysts at Raymond James value the browser at $50 billion, based on 2.25 billion users and Google's revenue share agreements with phone manufacturers that preinstall Chrome on devices.
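The Raymond James figure can be sanity-checked with simple arithmetic: a $50 billion price tag spread across 2.25 billion users implies a per-user value of roughly $22. A minimal sketch using only the figures cited above:

```python
# Implied per-user value from the Raymond James estimate:
# $50B valuation across 2.25B Chrome users.
chrome_valuation = 50e9   # $50 billion
chrome_users = 2.25e9     # 2.25 billion users
per_user_value = chrome_valuation / chrome_users
print(f"Implied value per Chrome user: ${per_user_value:.2f}")
```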
That's in line with where Gabriel Weinberg, CEO of rival search company DuckDuckGo, values Chrome. Weinberg, who testified in the antitrust trial, said in April that Chrome could be sold for up to $50 billion if a spinout were required. He said his estimate was based on "back-of-the-envelope" math looking at Chrome's user base.
Bob O'Donnell of market research firm TECHnalysis Research cautioned that Chrome is "not directly monetizable" because it serves as a gateway, and that it's "not clear how you measure that from a pure revenue-generating perspective."
Google Cloud
Google's cloud unit, which is third in the cloud infrastructure market behind Amazon Web Services and Microsoft Azure, is one of Alphabet's key growth engines and its biggest business outside of digital advertising.
Google began its big push into the market about a decade ago, though it had officially launched what was then called Google Cloud Platform (GCP) back in 2011. The unit was rebranded as simply Google Cloud in 2016.
Like AWS and Azure, Google Cloud generates revenue from businesses ranging from startups to large enterprises that run workloads on the company's servers. Additionally, customers pay for products like Google Workspace, the company's suite of productivity apps and collaboration tools.
In 2020, Google began breaking out its cloud business in financial statements, starting with revenue. In the fourth quarter of 2020, the first time Google included profit metrics for the unit, it recorded an operating loss of $1.24 billion.
The business turned profitable in 2023, and is now generating healthy margins. In the second quarter of 2025, Google reported an operating profit for the cloud business of $2.8 billion on revenue of $13.6 billion. Demand is so high that the company's cloud services now have a backlog, a measure of future committed revenue, of $106 billion, CFO Anat Ashkenazi said on the earnings call.
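The "healthy margins" claim can be checked directly from the Q2 2025 figures cited above: $2.8 billion in operating profit on $13.6 billion in revenue works out to an operating margin of about 20.6%. A minimal sketch:

```python
# Operating margin implied by the Q2 2025 cloud figures:
# $2.8B operating profit on $13.6B revenue.
operating_profit = 2.8e9
revenue = 13.6e9
margin = operating_profit / revenue
print(f"Cloud operating margin: {margin:.1%}")
```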
In March, Google agreed to acquire cloud security vendor Wiz for $32 billion, the company's largest deal ever.
Analysts at Wedbush Securities value Google's cloud at $602 billion, while TD Cowen in May put the number at about $549 billion. For Raymond James, the valuation is $579 billion.
D.A. Davidson analysts, who have the highest ascribed valuation at $682 billion, and TD Cowen analysts note that while Google still trails AWS and Azure, it's growing faster than Amazon's cloud business and has the potential for a premium valuation. That's based on its AI infrastructure, strong data analytics stack, and ability to capture more enterprise business.
It would be "one of the best standalone software stocks," D.A. Davidson analysts wrote in July.
YouTube
Google's $1.65 billion purchase of YouTube in 2006 is generally viewed as one of the best acquisitions ever by an internet company, alongside Facebook's $1 billion deal for Instagram in 2012.
YouTube is the largest video site on the web and a big part of Google's ad business. In the second quarter, YouTube ad revenue increased 13% to $9.8 billion, accounting for 14% of Google's total ad sales.
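Those two figures imply the size of Google's overall ad business for the quarter: if $9.8 billion is 14% of total ad sales, the total is about $70 billion. A minimal sketch using only the figures cited above:

```python
# If YouTube's $9.8B ad revenue was 14% of Google's total ad sales,
# the implied quarterly total follows directly.
youtube_ad_revenue = 9.8e9
youtube_share = 0.14
total_ad_revenue = youtube_ad_revenue / youtube_share
print(f"Implied total ad revenue for the quarter: ${total_ad_revenue / 1e9:.0f}B")
```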
Valuation estimates vary tremendously.
Dubbing it the "new king of all media," MoffettNathanson values YouTube at between $475 billion and $550 billion, arguing that it's larger and more powerful than any other player in Hollywood. At the top end of that range, YouTube would be worth about 22% of all of Alphabet.
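The share calculation is straightforward to verify against the roughly $2.5 trillion market cap cited earlier in the article. A minimal sketch covering both ends of the MoffettNathanson range:

```python
# MoffettNathanson's $475B-$550B range as a share of Alphabet's
# ~$2.5T market cap.
alphabet_cap = 2.5e12
low, high = 475e9, 550e9
low_share = low / alphabet_cap    # bottom of the range
high_share = high / alphabet_cap  # top of the range
print(f"YouTube as a share of Alphabet: {low_share:.0%} to {high_share:.0%}")
```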
YouTube recently overtook Netflix, which has a market cap of $515 billion, as the top streaming platform in terms of audience engagement.
TD Cowen analysts ascribe a much lower valuation of $271 billion. The firm notes that YouTube is one of six Google products with more than 2 billion monthly users, along with search, Google Maps, Gmail, Android and Chrome. Raymond James says YouTube is worth $306 billion.
For 2024, YouTube was the second-largest media company by revenue at $54.2 billion, trailing only Disney. The platform earns revenue from advertising and subscriptions.
The TD Cowen analysts said in May that they expect ad revenue to climb about 14% this year and that the unit will maintain a double-digit growth rate. There's also a fast-growing subscription business that includes YouTube TV, music and NFL Sunday Ticket.
Waymo
Alphabet's self-driving car company, Waymo, is by far its most high-profile success outside of Google.
Waymo currently operates the largest commercial autonomous ride-hailing fleet in the U.S., with more than 1,500 cars and over 100 million fully driverless miles logged. Rivals like Tesla and Amazon's Zoox are still mostly at the testing phase in limited markets.
When Alphabet was formed as Google's parent company, it created an "Other Bets" category to include businesses that it liked to call "moonshots," a term that had already made its way into Google lexicon.
"We won't become complacent, relying solely on small tweaks as the years wear on," the company wrote in its 2014 annual report, describing its moonshot projects.
Waymo was spun out of Google in 2016 to join Other Bets, which on the whole is still losing billions of dollars a year. In the second quarter, Alphabet recorded a loss for the category of $1.2 billion on $373 million in revenue.
In its most recent funding round in November, Waymo was valued at $45 billion. The transaction included outside investors Andreessen Horowitz, Tiger Global, Silver Lake, Fidelity and T. Rowe Price.
Some analysts now see the unit as worth many multiples of that figure. D.A. Davidson analysts estimated the valuation at $200 billion or more earlier this month. Oppenheimer assigned a base case valuation of $300 billion, on the assumption that Waymo generates $102 billion in adjusted earnings by 2040.
Raymond James values Waymo at $150 billion, with a prediction that rides per week will reach 1.4 million in 2027 and climb to 5.8 million by 2030. TD Cowen estimated Waymo's enterprise mid-point value at $60 billion.
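Raymond James's ride projections imply a steep compound growth rate: going from 1.4 million weekly rides in 2027 to 5.8 million by 2030 works out to roughly 60% annualized growth over those three years. A minimal sketch using only the figures cited above:

```python
# Implied compound annual growth rate from Raymond James's projections:
# 1.4M weekly rides in 2027 growing to 5.8M by 2030 (three years).
rides_2027 = 1.4e6
rides_2030 = 5.8e6
years = 3
cagr = (rides_2030 / rides_2027) ** (1 / years) - 1
print(f"Implied annual growth in weekly rides: {cagr:.1%}")
```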
Waymo says it now conducts more than 250,000 paid weekly trips in the markets where it operates commercially, including Atlanta, Austin, Los Angeles, Phoenix and San Francisco. The company said it would be expanding to Philadelphia, Dallas and elsewhere.
