
Perplexity CEO Aravind Srinivas calls Google ‘a giant bureaucratic organization': ‘At some point, they need to…'
Perplexity CEO Aravind Srinivas recently took aim at Google, saying its core business model is not built for the AI-driven future of web browsing. In a Reddit "Ask Me Anything" on July 16, Srinivas said that Google's reliance on ads is at odds with AI agents. He wrote, 'They have business model constraints on letting agents do the clicks and work for you while continuing to charge advertisers enormous money to keep bidding for clicks and conversions.' Arguing that the tech giant is constrained by its need to protect ad revenue, he said, 'At some point, they need to embrace one path and suffer, in order to come out stronger; rather than hedging and playing both ways.'
Google is a giant bureaucratic organization: Perplexity CEO
Aravind Srinivas also criticized Google's internal structure, calling it 'a giant bureaucratic organization' with 'too many decision makers and disjoint teams.'
Srinivas, during the AMA session, said that he expects Google to "pay close attention" and eventually copy or adopt features from Comet – the company's AI-powered web browser.
At a Y Combinator event in June, the Perplexity CEO said, 'If your company is something that can make revenue on the scale of hundreds of millions of dollars or potentially billions of dollars, you should always assume that a model company will copy it.'
Aravind Srinivas also referred to Google's internal Project Mariner as 'similar but quite limited.' He said the browser is designed to prioritize users, not advertisers.
He said, 'We underestimated people's willingness to pay,' adding, 'We also want to bring a change to this world. Enough of the monopoly of Google.'
Comet is currently available by invitation and only to users of Perplexity's top-tier plan, priced at $200/month or $2,000/year. A free version is planned.
Despite his criticism, Srinivas acknowledged the browser wouldn't be possible without Chromium, the open-source project maintained by Google.
Related Articles


India Today, 10 minutes ago
AI godfather warns tech giants are downplaying AI risks, says only DeepMind's Demis Hassabis gets it
Geoffrey Hinton, widely known as the 'Godfather of AI,' is raising concerns about the rapid development of AI and the fact that major technology companies are downplaying its dangers. Speaking on the One Decision podcast, Hinton said many corporate leaders are aware of the risks but are avoiding taking meaningful action. 'Many of the people in big companies, I think, are downplaying the risk publicly,' Hinton said. 'People like Demis, for example, really do understand the risks and really want to do something about it.'

Hinton, who is also a Nobel Laureate, was awarded the 2024 Nobel Prize in Physics alongside John J. Hopfield for their work on artificial neural networks. In fact, Hinton's decades-long research paved the way for today's rapid advancement in artificial intelligence. However, he is now warning that advanced AI systems are becoming smarter and smarter and are even learning in ways humans don't fully understand. 'The rate at which they've started working now is way beyond what anybody expected,' he said. He also admitted that he regrets not recognizing these dangers earlier in his career: 'I should have realized much sooner what the eventual dangers were going to be. I always thought the future was far off and I wish I had thought about safety sooner.'

Hinton left Google in 2023 after more than a decade at the company. His departure was widely interpreted as a protest against its aggressive AI push. However, in the podcast, Hinton clarified that this narrative was not true and had been exaggerated. 'There's a wonderful story that the media loves: this honest scientist who wanted to tell the truth, so I had to leave Google. It's a myth,' Hinton said. 'I left Google because I was 75 and I couldn't program effectively anymore, but when I left, maybe I could talk about all these risks more freely.' He added that staying at Google would have inevitably meant some level of self-censorship. 'You can't take their money and then not be influenced by what's in their own interest,' he said.

In the podcast, Hinton also spoke about Demis Hassabis, praising him as one of the few leaders who 'really wants to do something about' the risks of AI. Hassabis, who sold DeepMind to Google in 2014, now heads its AI research arm. While he talks about its development, he has also long expressed concern about the potential misuse of advanced AI. Earlier this year, in an interview with CNN, Hassabis admitted he is worried about AI. But he said he is less concerned about AI replacing jobs and more focused on the possibility that the technology could fall into the wrong hands.

'A bad actor could repurpose those same technologies for a harmful end,' Hassabis told CNN's Anna Stewart. 'And so one big thing is how do we restrict access to these systems, powerful systems, to bad actors but enable good actors to do many, many amazing things with it?'


India Today, 10 minutes ago
Should you double-check your doctor with ChatGPT? Yes, you absolutely should
First, there was Google. Or rather Doctor Google, as it is mockingly called by the men and women in white coats, the ones who come in an hour late to see their patients and who brush off every little query from patients brusquely and sometimes with unwarranted rudeness. Now there is a new foe in town, and it is only now that doctors are beginning to realise it. This is ChatGPT, or Gemini, or something like DeepSeek: AI systems that are coherent and powerful enough to act like medical guides. Doctors are, obviously, not happy about it. Just the way they rage at patients for trying to discuss what the ailing person finds after Googling symptoms, now they are fuming against the advice that ChatGPT can dish out.

The problem is that no one likes to be double-checked. And Indian doctors, in particular, hate it. They want their word to be the gospel. Bhagwan ka roop (a form of God), or something like that. But frustratingly for them, the capabilities of new AI systems are such that anyone can now re-check their doctor's prescription, or can read diagnostic films and observations, using tools like ChatGPT. The question, however, is: should you do it? Absolutely yes. The benefits outweigh the harms.

Let me tell you a story. This is from around 15 years ago. A person whom I know well went to a doctor for an ear infection. This was a much-celebrated doctor, leading the ENT department in a hospital chain which has a name starting with the letter F. The doctor charged the patient a princely sum and poked and probed the ear in question. After a few days of tests and consultations, a surgery, a rather complex one, was recommended. It was at this time, when the patient was submitting the consent forms for the surgery that was scheduled for a few days later, that the doctor discovered some new information. He found that the patient was a journalist in a large media group.

This new information, although not related to the patient's ear, quickly changed the tune the doctor was whistling. He became coy and cautious. He started having second thoughts about the surgery. So, he recommended a second opinion, writing a reference for another senior doctor, who was the head of the ENT department at a hospital chain which has a name starting with the letter A. The doctor at this new hospital carried out his own observations. The ear was probed and poked again, and within minutes he declared, 'No surgery needed. Absolutely, no surgery needed.'

What happened? I have no way of confirming this. But I believe here is what happened. The doctor at hospital F was pushing for an unnecessary and complex surgery, one where the chances of something going wrong were minimal but not zero. However, once he realised that the patient was a journalist, he decided not to risk it, and to get out of the situation he relied on the doctor at hospital A.

This is a story I know, but I am sure almost everyone in this country has similar anecdotes. At one time or another, we have all had a feeling that this doctor or that was probably pushing for some procedure, some diagnostic test, or some advice that did not sit well with us. And in many unfortunate cases, people actually underwent some procedure or some treatment that harmed them more than it helped. Medical negligence in India flies under the radar of 'doctor is bhagwan ka roop' and similar reverence. Unlike in other countries, where medical negligence can have serious repercussions for doctors and hospitals, in India people in white coats get flexibility in almost everything that they do.

A lot of it is due to the reverence that society has for doctors, the savers of life. Some of it is also because, in India, we have far fewer doctors than are needed. This is not to say that doctors in India are incompetent. In general, they are not, largely thanks to the scholastic nature of modern medicine and procedures. Most of them also work crazy long hours, under conditions that are extremely frugal in terms of equipment and highly stressful in terms of workload.

And this is exactly why we should use ChatGPT to double-check our doctors in India. Because there is a huge supply-demand mismatch, it is safe to say that we have doctors in the country who are not up for the task, whether these are doctors with dodgy degrees or those who have little to no background in modern medicine, and yet they put Dr in front of their name and run clinics where they deal with the most complex cases.

It is precisely because doctors are overworked in India that their patients should use AI to double-check their diagnostic opinions and suggested treatments. Doctors, irrespective of what we feel about them and how we revere them, are humans at the end of the day. They are prone to making the same mistakes that any human would make in a challenging work environment.

And finally, because many doctors in India — not all, but many — tend to overdo their treatment and diagnostic tests, we should double-check them with AI. Next time you get a CT scan, also show it to ChatGPT and then discuss with your doctor if the AI is telling you something different. In the last one year, again and again, research has highlighted that AI is extremely good at diagnosis. Just earlier this month, a new study by a team at Microsoft found that MAI-DxO, a specially-tuned AI system for medical diagnosis, outperformed human doctors. The 21 doctors who were part of the study were correct in only 20 per cent of cases, while MAI-DxO was correct in 85 per cent of cases in its testing.

None of this is to say that you should replace your doctor with ChatGPT. Absolutely not. Good doctors are indeed precious and their consultation is priceless. They will also be better with the subtleties of the human body than any AI system. But in the coming months and years, I have a feeling that doctors in India will launch a tirade against AI, similar to how they once fought Dr Google. They will shame and harangue their patients for using ChatGPT for a second opinion. When that happens, we should push back. Indian doctors are not used to questions; they don't like to explain; they don't want to be second-guessed or double-checked. And that is exactly why we should ask them questions, seek explanations and double-check them, if needed, even with the help of ChatGPT.

(Javed Anwer is Technology Editor, India Today Group Digital. Latent Space is a weekly column on tech, world, and everything in between. The name comes from the science of AI and, to reflect it, Latent Space functions in the same way: by simplifying the world of tech and giving it a context.)

(Views expressed in this opinion piece are those of the author)


NDTV, 13 minutes ago
AI Agents Are Here: What They Can Do And How They Can Go Wrong
Melbourne: We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can work in "teams" or use tools to accomplish complex tasks.

The latest hot product is OpenAI's ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single more powerful system which, according to the developer, "thinks and acts". These new systems represent a step up from earlier AI tools. Knowing how they work and what they can do - as well as their drawbacks and risks - is rapidly becoming essential.

From chatbots to agents

ChatGPT launched the chatbot era in November 2022, but despite its huge popularity the conversational interface limited what could be done with the technology.

Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision.

Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory. Multiple AI agent systems may be able to work together, communicating with each other to plan, schedule, decide and coordinate to solve complex problems. Agents are also "tool users", as they can call on software tools for specialised tasks - things such as web browsers, spreadsheets, payment systems and more.

A year of rapid development

Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information and submit online forms.

Other AI developers were quick to follow. OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google's Vertex AI and Meta's Llama agents. Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (similar to what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged "cheat at anything" agent that has gained attention but is yet to deliver meaningful results.

Not all agents are made for general-purpose activity. Some are specialised for particular areas. Coding and software engineering are at the vanguard here, with Microsoft's Copilot coding agent and OpenAI's Codex among the frontrunners. These agents can independently write, evaluate and commit code, while also assessing human-written code for errors and performance lags.

Search, summarisation and more

One core strength of generative AI models is search and summarisation. Agents can use this to carry out research tasks that might take a human expert days to complete. OpenAI's Deep Research tackles complex tasks using multi-step online research. Google's AI "co-scientist" is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents can do more - and get more wrong

Despite the hype, AI agents come loaded with caveats.
Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks. OpenAI also says its ChatGPT agent is "high risk" due to its potential for assisting in the creation of biological and chemical weapons. However, the company has not published the data behind this claim, so it is difficult to judge.

But the kind of risks agents may pose in real-world situations are shown by Anthropic's Project Vend. Vend assigned an AI agent to run a staff vending machine as a small business - and the project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food. In another cautionary tale, a coding agent deleted a developer's entire database, later saying it had "panicked".

Agents in the office

Nevertheless, agents are already finding practical applications. In 2024, Telstra rolled out Microsoft Copilot subscriptions at scale. The company says AI-generated meeting summaries and content drafts save staff an average of 1-2 hours per week. Many large enterprises are pursuing similar strategies. Smaller companies too are experimenting with agents, such as Canberra-based construction firm Geocon's use of an interactive AI agent to manage defects in its apartment developments.

Human and other costs

At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work. At the same time, agent use may also accelerate the decline of entry-level white-collar jobs.

People who use AI agents are also at risk. They may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks and compounding errors can very quickly derail an agent from its task and goals into causing harm, loss and injury.

The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents - especially for more complex tasks.

Learn about agents - and build your own

Despite these ongoing concerns, we can expect AI agents to become more capable and more present in our workplaces and daily lives. It's not a bad idea to start using (and perhaps building) agents yourself, and understanding their strengths, risks and limitations.

For the average user, agents are most accessible through Microsoft Copilot Studio. This comes with inbuilt safeguards, governance and an agent store for common tasks. For the more ambitious, you can build your own AI agent with just a few lines of code using the LangChain framework; a rough sketch of what that looks like follows below.

(Disclaimer Statement: Daswin de Silva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.) This article is republished from The Conversation under a Creative Commons license. Read the original article.
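As a concrete illustration of the "few lines of code" claim, here is a minimal sketch (not from the article) of a tool-using LangChain agent in Python. It assumes a LangChain release that still ships the classic initialize_agent helper along with the langchain-openai package, and an OPENAI_API_KEY set in the environment; exact module paths and helper names have shifted across LangChain versions, so treat this as a rough shape rather than a definitive recipe. The model name used below is likewise just an example.

# Illustrative sketch of a tool-using LangChain agent; API names vary by version.
# Assumes: pip install langchain langchain-openai, and OPENAI_API_KEY is set.
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, load_tools, AgentType

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # the reasoning model
tools = load_tools(["llm-math"], llm=llm)             # a calculator tool the agent may call
agent = initialize_agent(                              # classic ReAct-style agent loop
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
print(agent.run("What is 18% of 245?"))               # agent decides when to use the tool

This is the "tool user" pattern the article describes: the model reasons in a loop, decides for itself when to call the calculator tool, observes the result, and folds it back into its final answer.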