
Aravind Srinivas' net worth: How much does the Perplexity CEO earn as his company bids $34.5 billion for the Chrome browser?
The three-year-old company has secured approximately $1 billion in funding to date from investors such as Nvidia and Japan's SoftBank, and was most recently valued at $14 billion. Let's take a look at Aravind Srinivas' net worth as the IIT graduate's Perplexity proposes, in a letter of intent to Google CEO Sundar Pichai, a bid worth nearly double the startup's own valuation (reportedly $18 billion in its latest funding round).
Perplexity AI's CEO and co-founder Aravind Srinivas has achieved an impressive milestone, with his company reaching a valuation of $1 billion (approximately ₹8,300 crore) in just two years, according to ET Now. However, this represents only a fraction of his growing business empire.
His company is estimated to bring in nearly $50 million in annual revenue, and Aravind has also established himself as a notable investor, holding stakes in companies such as Chennai Meenakshi Multispeciality Hospital Ltd. and eMudhra Ltd, reports said.
The 31-year-old's personal wealth is reportedly estimated at around ₹ 223.8 crore.
Driven by a strong interest in machine learning, Srinivas initially faced setbacks when he couldn't transition to a computer science program to formally pursue his ambitions. Despite this, his determination led him to teach himself Python and stand out in Kaggle competitions. His dedication eventually earned him an internship with renowned deep learning expert Yoshua Bengio, which later opened the door to a PhD in Artificial Intelligence at UC Berkeley.
Perplexity already offers an AI-powered browser called Comet, capable of performing certain tasks on behalf of users. Acquiring Chrome would give the company access to the browser's massive user base of over three billion, significantly boosting its ability to compete with larger players like OpenAI, which is also developing its own AI browser.
According to a term sheet seen by Reuters, Perplexity's offer includes commitments to keep Chromium, the open-source code behind Chrome, open source, to invest $3 billion over the next two years, and to retain Chrome's existing default search engine settings. The company emphasised that the proposal, which involves no equity component, is designed to protect user choice and reduce potential concerns around future market competition.
However, analysts believe Google would probably not sell Chrome and could face a long legal battle to avoid that result, considering its importance to the company's AI push as it introduces major updates like AI-generated search summaries, called Overviews.
Related Articles


India Today
Google Pixel 10 to launch in 3 days: Everything we know
By Unnati Gusain
Google has officially announced the Pixel 10, which is set to launch on August 20. Here is how it will differ from the Pixel 9.
Design: Based on the official teaser, the upcoming Pixel 10 will feature a design similar to the Pixel 9's.
Processor: The Pixel 10 series is expected to be powered by Google's new Tensor G5 chipset, which, for the first time, is reportedly being manufactured by TSMC. At its heart, the Pixel 9 is powered by Google's Tensor G4 processor, paired with the Titan M2 security coprocessor.
Camera: Another big change could be the inclusion of a telephoto lens on the base Pixel 10 model, a first for a non-Pro Pixel. For photography, the Pixel 9 houses a dual rear camera setup consisting of a 50-megapixel main camera with OIS and a 48-megapixel ultrawide lens.
Software: The Pixel 10 will arrive with the final version of Android 16. The Pixel 9 runs on Android 14 and promises seven years of Android OS and security updates, so you get long-term software support.


Mint
Perplexity's $34.5 billion bid for Google's Chrome seems mischievous
If you're fighting an antitrust lawsuit that might end up breaking your company into pieces, one defence is to argue that those pieces would wither away if separated from the mother ship, thus creating a worse outcome for the consumer. That's what Google has been doing in the face of US Department of Justice (DoJ) calls for it to sell Chrome, its market-leading web browser, as part of the remedies for its monopolistic behaviours involving its search business. As the company wrote on its blog in May, the DoJ's proposal to break off Chrome, which billions of people use for free, would result in a "shadow of the current Chrome," according to Chrome leader Parisa Tabriz. She added that the browser would likely become "insecure and obsolete."
This defence was complicated somewhat on 12 August when it emerged that Perplexity, an AI company, had made an "audacious" (Bloomberg), "longshot" (Wall Street Journal) and "mischievous" (my term) bid to take Chrome off Google's hands for $34.5 billion. Perplexity doesn't have $34.5 billion. The company was valued at $18 billion at the time of its last funding round, but said it would come up with funds from a coalition of investors who are already on board with the plan. The deal would realistically be possible only if the court does force the Alphabet unit to sell Chrome, which, according to most analysts I've spoken to, would be an extreme measure. But it's not an impossibility. Indeed, it might have become slightly more possible thanks to Perplexity's bid and what might come next. But before I get into that, let's humour this for a second and talk about why buying Chrome would make sense for Perplexity. The web browser has become a critical early battleground for shaping new habits in AI.
Perplexity realizes this and recently introduced its own browser, Comet, which places its AI assistant front and centre: if you type a query into the address bar, Comet will turn to its AI instead of searching Google. At scale, this shift in behaviour from search engine to AI would be profound. The problem is that Comet has a tiny market share compared with Chrome's 70% of desktop browser use globally and 67% on mobile phones. Going by loose estimates of about 3.5 billion Chrome users, Perplexity would be paying about $10 per user. The goal then would be to convert as many of them as possible to users of its $20-a-month "Pro" AI plan. As AI business models go, it's actually not bad. Unlike its biggest competitors, Perplexity lacks a shop window for its AI, an existing highly-used product where users can discover the functionality of AI without having to consciously go looking for it.
Still, the lack of movement in Alphabet's share price on 12 August suggests investors have brushed off the possibility. For starters, some analysts think the valuation is way off. The offer "vastly undervalues the asset, and should not be taken seriously," according to Baird. A better number, its analysts said, would be more like $100 billion, though it's hard to say how the dynamics of a deal would play out if Google had no choice but to sell Chrome. Previous valuations put it somewhere between $30 billion and $50 billion, a figure that seems a little conservative if the browser is indeed pivotal to building AI market share. Regardless, what this bid truly represents is a cunning plan to get in the ear of Judge Amit Mehta as he considers the appropriate antitrust remedies for Google's prior bad behaviour.
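The back-of-the-envelope numbers in that paragraph are easy to check; a quick sketch using the article's own figures (the 3.5 billion user count is, as noted, a loose estimate):

```python
# Rough per-user cost implied by Perplexity's $34.5 billion bid,
# using the article's loose estimate of ~3.5 billion Chrome users.
bid_usd = 34.5e9
chrome_users = 3.5e9
cost_per_user = bid_usd / chrome_users
print(f"~${cost_per_user:.2f} per user")

# Months of the $20-a-month 'Pro' plan needed to recoup that cost
# from a single converted user (ignoring margins and churn).
pro_price_per_month = 20.0
months_to_recoup = cost_per_user / pro_price_per_month
print(f"{months_to_recoup:.2f} months of Pro to recoup")
```

At roughly $10 a head, a single user converted to the Pro plan would cover their share of the purchase price in about two weeks of subscription revenue, which is why the columnist calls the model "not bad."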
With this move, Perplexity is skewering Google's defence that spinning out Chrome would be fatal not just to Chrome but to Chromium, the open-source project that forms the backbone of most top web browsers, including Google's direct competitors. It can now be sincerely argued that there's a bona fide offer from a company capable of not only taking Chrome out of Google's hands but developing it further, keeping it from becoming "insecure and obsolete," as the company warned. What's more, it seems likely other AI companies will throw their names into the ring. OpenAI's head of ChatGPT testified during the trial that the company would be interested in buying Chrome, "as would many other parties."
How much of this the judge takes into account is another thing. He probably shouldn't: the rationale for forcing a sale of Chrome would be to prevent Alphabet from creating a new AI monopoly with the same tactics it used to dominate search. Fine, but Judge Mehta has other tools at his disposal to achieve that more fairly. After all, the only reason an AI company would be interested in buying Chrome, at a cost that's double its existing value, would be to use the browser for those same anti-competitive ends.
©Bloomberg. The author is Bloomberg Opinion's US technology columnist.


Mint
Inside India's two-track strategy to become an AI powerhouse
Bengaluru: At Google's annual I/O Connect event in Bengaluru this July, the spotlight was on India's AI ambitions. With over 1,800 developers in attendance, the recurring theme echoing across panel discussions, product announcements and workshops was that of building AI capability for India's linguistic diversity. With 22 official languages and hundreds of spoken dialects, India faces a monumental challenge in building AI systems that can work across this multilingual landscape. In the demo area of the event, this challenge was front and centre, with startups showcasing how they're tackling it. Among them was Sarvam AI, demonstrating Sarvam-Translate, a multilingual model fine-tuned on Google's open-source large language model (LLM), Gemma. Next to it, CoRover demonstrated BharatGPT, a chatbot for public services such as the one used by the Indian Railway Catering and Tourism Corporation (IRCTC).
At the event, Google announced that AI startups Sarvam, Soket AI and Gnani are building the next generation of India AI models, fine-tuning them on Gemma. At first glance, this might seem contradictory. Three of these startups are among the four selected to build India's sovereign large language models under the ₹10,300 crore IndiaAI Mission, a government initiative to develop home-grown foundational models from scratch, trained on Indian data, languages and values. So, why Gemma? Building competitive models from scratch is a resource-heavy task involving multiple challenges, and India does not have the luxury of building from scratch, in isolation. With limited high-quality training datasets, an evolving compute infrastructure and urgent market demand, the more pragmatic path is to start with what is available.
These startups are therefore taking a layered approach, fine-tuning open-source models to solve real-world problems today, while simultaneously building the data pipelines, user feedback loops and domain-specific expertise needed to train more indigenous and independent models over time. Fine-tuning involves taking an existing large language model already trained on vast amounts of general data and teaching it to specialize further on focused and often local data, so that it can perform better in those contexts.
Build and bootstrap
Project EKA, an open-source, community-driven initiative led by Soket, is a sovereign LLM effort being developed in partnership with IIT Gandhinagar, IIT Roorkee and IISc Bangalore. It is being designed from scratch, with training code, infrastructure and data pipelines all sourced within India. A 7 billion-parameter model is expected in the next four to five months, with a 120 billion-parameter model planned over a 10-month cycle. "We've mapped four key domains: agriculture, law, education and defence," says Abhishek Upperwal, co-founder of Soket AI. "Each has a clear dataset strategy, whether from government advisory bodies or public-sector use cases." A key feature of the EKA pipeline is that it is entirely decoupled from foreign infrastructure. Training happens on India's GPU cloud and the resulting models will be open-sourced for public use. The team, however, has taken a pragmatic approach, using Gemma to run initial deployments. "The idea is not to depend on Gemma forever," Upperwal clarifies. "It's to use what's there today to bootstrap and switch to sovereign stacks when ready."
CoRover's BharatGPT is another example of this dual strategy in action. It currently runs on a fine-tuned model, offering conversational agentic AI services in multiple Indian languages to various government clients, including IRCTC, Bharat Electronics Ltd and Life Insurance Corporation.
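The fine-tuning idea described above, starting from a model trained on broad general data and specializing it on a small local dataset, can be shown with a toy numerical sketch. This is an illustration of the concept only, using plain gradient descent on a linear model; it is not how Gemma or any production LLM is actually trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr, steps):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# "Pre-training": plentiful general data drawn from one relationship.
X_gen = rng.normal(size=(1000, 3))
y_gen = X_gen @ np.array([1.0, 2.0, 3.0])
w_pre = train(np.zeros(3), X_gen, y_gen, lr=0.1, steps=200)

# "Fine-tuning": a small domain-specific dataset whose relationship
# differs in one direction. Start from the pre-trained weights and
# use a smaller learning rate, so the general solution is adapted
# rather than erased.
X_dom = rng.normal(size=(50, 3))
y_dom = X_dom @ np.array([1.0, 2.0, 4.0])
w_ft = train(w_pre.copy(), X_dom, y_dom, lr=0.01, steps=100)

err_before = np.mean((X_dom @ w_pre - y_dom) ** 2)
err_after = np.mean((X_dom @ w_ft - y_dom) ** 2)
print(err_after < err_before)  # fine-tuning improves the domain fit
```

The same logic scales up: the startups keep the broad capabilities of the base model while a comparatively small amount of Indian-language data shifts it toward their domains.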
"For applications in public health, railways and space, we needed a base model that could be fine-tuned quickly," says Ankush Sabharwal, CoRover's founder. "But we have also built our own foundational LLM with Indian datasets." Like Soket, CoRover treats the current deployments as both service delivery and dataset creation. By pre-training and fine-tuning Gemma to handle domain-specific inputs, it is trying to improve accessibility today while building a bridge to future sovereign deployments. "You begin with an open-source model. Then you fine-tune it, add language understanding, lower latency and expand domain relevance," Sabharwal explains. "Eventually, you'll swap out the core once your own sovereign model is ready."
Amlan Mohanty, a technology policy expert, calls India's approach an experiment in trade-offs, betting on models such as Gemma to enable rapid deployment without giving up the long-term goal of autonomy. "It's an experiment in reducing dependency on adversarial countries, ensuring cultural representation and seeing whether firms from allies like the US will uphold those expectations," he says. Mint reached out to Sarvam and Gnani with detailed queries regarding their use of Gemma and its relevance to their sovereign AI initiatives, but the companies did not respond.
Why local context is critical
For India, building its own AI capabilities is not just a matter of nationalistic pride or keeping up with global trends. It's more about solving problems that no foreign model can adequately address today. Think of a migrant from Bihar working in a cement factory in rural Maharashtra, who goes to a local clinic with a persistent cough. The doctor, who speaks Marathi, shows him a chest X-ray, while the AI tool assisting the doctor explains the findings in English, in a crisp Cupertino accent, using medical assumptions based on Western body types. The migrant understands only Hindi, and much of the nuance is lost.
Far from being just a language problem, it's a mismatch in cultural, physiological and contextual grounding. A rural frontline health worker in Bihar needs an AI tool that understands local medical terms in Maithili, just as a farmer in Maharashtra needs crop advisories that align with state-specific irrigation schedules. A government portal should be able to process citizen queries in 15 languages with regional variations. These are high-impact, everyday use cases where errors can directly affect livelihoods, the functioning of public services and health outcomes. Fine-tuning open models gives Indian developers a way to address these urgent, ground-level needs right now, while building the datasets, domain knowledge and infrastructure that can eventually support a truly sovereign AI stack. This dual-track strategy is possibly one of the fastest ways forward, using open tools to bootstrap sovereign capacity from the ground up. "We don't want to lose the momentum. Fine-tuning models like Gemma lets us solve real-world problems today in applications such as agriculture or education, while we build sovereign models from scratch," says Soket AI's Upperwal. "These are parallel but separate threads. One is about immediate utility, the other about long-term independence. Ultimately these threads will converge."
A strategic priority
The IndiaAI Mission is a national response to a growing geopolitical issue. As AI systems become central to education, agriculture, defence and governance, over-reliance on foreign platforms raises the risks of data exposure and loss of control. This was highlighted last month when Microsoft abruptly cut off cloud services to Nayara Energy after European Union sanctions on its Russian-linked operations. The disruption, which was reversed only after a court intervention, raised alarms about how foreign tech providers can become geopolitical pressure points.
Around the same time, US President Donald Trump doubled tariffs on Indian imports to 50%, showing how trade and tech are increasingly being used as leverage. Besides reducing dependence, sovereign AI systems are also important for India's critical sectors to accurately represent local values, regulatory frameworks and linguistic diversity. Most global AI models are trained on English-dominant, Western datasets, which makes them poorly equipped to handle the realities of India's multilingual population or the domain-specific complexity of its systems. This becomes a challenge in applications such as interpreting Indian legal judgments or accounting for local crop cycles and farming practices in agriculture. Mohanty says that sovereignty in AI isn't about isolation, but about who controls the infrastructure and who sets the terms. "Sovereignty is basically about choice and dependencies. The more choice you have, the more sovereignty you have." He adds that full-stack independence, from chips to models, is not feasible for any country, including India. Even global powers such as the US and China balance domestic development with strategic partnerships. "Nobody has complete sovereignty or control or self-sufficiency across the stack, so you either build it yourself or you partner with a trusted ally." Mohanty also points out that the Indian government has taken a pragmatic approach by staying agnostic to the foundational elements of its AI stack. This stance is shaped less by ideology and more by constraints such as the lack of Indic data, compute capacity and ready-made open-source alternatives built for India.
India's data lacunae
Despite the momentum behind India's sovereign AI push, the lack of high-quality training data, particularly in Indian languages, continues to be one of its most fundamental roadblocks. While the country is rich in linguistic diversity, that diversity has not translated into digital data that AI systems can learn from.
Manish Gupta, director of engineering at Google DeepMind India, cited internal assessments that found that 72 of India's spoken languages with over 100,000 speakers had virtually no digital presence. "Data is the fuel of AI and 72 out of those 125 languages had zero digital data," he says. To address this linguistic challenge for Google's India market, the company launched Project Vaani in collaboration with the Indian Institute of Science (IISc), an initiative that aims to collect voice samples across hundreds of Indian districts. The first phase captured over 14,000 hours of speech data from 80 districts, representing 59 languages, 15 of which previously had no digital datasets. The second phase expanded coverage to 160 districts, and future phases aim to reach all 773 districts in India. "There's a lot of work that goes into cleaning up the data, because sometimes the quality is not good," Gupta says, referring to the challenges of transcription and audio consistency. Google is also developing techniques to integrate these local language capabilities into its large models. Gupta says that learnings from widely spoken languages such as English and Hindi are helping improve performance in lower-resource languages such as Gujarati and Tamil, largely due to cross-lingual transfer capabilities built into multilingual language models. The company's Gemma LLM incorporates Indian language capabilities derived from this body of work. Gemma ties into LLM efforts run by Indian startups through a combination of Google's technical collaborations, infrastructure guidance and publicly released datasets. According to Gupta, the strategy is driven by both commercial and research imperatives, with India seen as a global testbed for multilingual and low-resource AI development.
Supporting local language AI, especially through partnerships with startups such as Sarvam and Soket AI, allows Google to build inclusive tools that can scale beyond India to other linguistically complex regions in Southeast Asia and Africa. For India's sovereign AI builders, the lack of ready-made, high-quality Indic datasets means that model development and dataset creation must happen in parallel.
For the Global South
India's layered strategy, using open models now while concurrently building sovereign models, also offers a roadmap for other countries navigating similar constraints. It's a blueprint for the Global South, where nations are wrestling with the same dilemma of how to build AI systems that reflect local languages, contexts and values without the luxury of vast compute budgets or mature data ecosystems. For these countries, fine-tuned open models offer a bridge to capability, inclusion and control. "Full-stack sovereignty in AI is a marathon, not a sprint," Upperwal says. "You don't build a 120 billion model in a vacuum. You get there by deploying fast, learning fast and shifting when ready." Singapore, Vietnam and Thailand are already exploring similar methods, using Gemma to kickstart their local LLM efforts. By 2026, when India's sovereign LLMs, including EKA, are expected to be production-ready, Upperwal says the dual track will likely converge: bootstrapped models will fade and homegrown systems may take their place. But even as these startups build on open tools such as Meta's Llama or Google's Gemma, which are engineered by global tech giants, the question of dependency continues to loom. Even for open-source models, control over architecture, training techniques and infrastructure support still leans heavily on Big Tech. While Google has open-sourced speech datasets, including Project Vaani's, and extended partnerships with IndiaAI Mission startups, the terms of such openness are not always symmetrical.
India's sovereign plans, therefore, depend not on shunning open models but on eventually outgrowing them. "If Google is directed by the US government to close down its weights (model parameters), or increase API (application programming interface) prices or change transparency norms, what would the impact be on Sarvam or Soket?" asks Mohanty, adding that while the current India-US tech partnership is strong, future policies could shift and jeopardize India's digital sovereignty. In the years ahead, India and other nations in the Global South will face a critical question: whether they can convert this borrowed support into a complete, sovereign AI infrastructure before the terms of access shift or the window to act closes.