Sundar Pichai drops major hint about Google's next CEO, says will have an extraordinary...

India.com · 4 days ago

Who's next in line to be Google's CEO? Sundar Pichai recently offered a rare glimpse that set off speculation across the tech industry. Speaking at a recent conference in San Francisco, Pichai did not name a possible successor or suggest he plans to leave, but he did describe what Google's next CEO should be like. He explained that the next CEO must be someone who genuinely understands both the immense power the company holds and the deep responsibility that comes with it, especially in a world increasingly shaped by technology and artificial intelligence.
At the Bloomberg Technology Summit, Pichai shared a rare glimpse of his vision for Google's next chief executive. He said the next CEO has to truly care about whether the products and innovations built at Google actually benefit people. He also noted that the role of artificial intelligence at the company continues to expand, so by the time a successor is chosen, they will have what he calls an 'extraordinary AI companion' to assist them, a sign of how significant AI will be for Google's future.
Pichai also addressed a concern on many people's minds: will AI take human jobs away? He said he does not think so. Rather, he believes AI will enhance how engineers work by handling the tedious and repetitive parts of their jobs. 'It's about making people more effective, not replacing them,' he said. He also noted that Google still intends to hire additional engineers until at least 2026.
Born on June 10, 1972, in Madurai, Tamil Nadu, Pichai did his schooling in Chennai and earned a degree in metallurgical engineering from the Indian Institute of Technology Kharagpur.
The 53-year-old holds an M.S. in materials science and engineering from Stanford University and an MBA from the Wharton School of the University of Pennsylvania. In 2015, Pichai was appointed Google CEO, replacing Larry Page, who moved to lead Alphabet, the newly formed parent company created that year. Under his leadership, Google has focused on developing products and services, powered by the latest advances in AI, that offer help in moments big and small.


Related Articles

Sorry ScaleAI, Mark Zuckerberg investing billions in your company means it's 'Good Bye' from Google, Microsoft and some of the biggest technology companies

Time of India · 4 hours ago

Google, the largest customer of AI data-labeling startup Scale AI, is planning to sever ties with the company following Facebook parent Meta's acquisition of a 49% stake in Scale in a deal that values the startup at about $29 billion, according to five sources familiar with the matter. The move, reported by Reuters, has raised concerns among Scale's major clients, including Microsoft and Elon Musk's xAI, who are also reportedly reconsidering their partnerships over fears of exposing proprietary data to a key rival. The shift underscores growing concerns among AI developers about data security and competitive risks as industry giants like Meta deepen their influence in the AI ecosystem.

What is Meta's investment in ScaleAI?
Meta's $14.3 billion investment in Scale AI, which was previously valued at about $14 billion, includes the move of Scale's CEO, Alexandr Wang, to Meta, where he will lead efforts to develop 'superintelligence.' This has intensified worries among Scale's clients, particularly generative AI companies, that their sensitive research priorities and technical blueprints could be accessed by Meta through Scale's data-labeling operations. Google, which had planned to pay Scale $200 million this year, has already begun discussions with Scale's competitors to shift its workload, sources said. The company had been diversifying its data service providers for over a year, but Meta's investment has accelerated Google's push to exit all key contracts with Scale, a process that could move quickly given the structure of data-labeling agreements. Microsoft and xAI are also pulling back, while OpenAI, a smaller Scale customer, scaled down its reliance months ago but will continue working with Scale as one of its many vendors, according to OpenAI's CFO.

Why Google and Microsoft leaving is bad news for ScaleAI
Scale AI, which serves self-driving car companies, the U.S. government, and generative AI firms, relies heavily on a few major clients. A Scale spokesperson emphasized that the company remains independent and committed to safeguarding customer data, stating, 'Our business remains strong, spanning major companies and governments.' However, the potential loss of key clients like Google could significantly impact Scale's operations.

Rajrishi Singhal: What markets demand needn't be what society wants

Mint · 11 hours ago

Apple's highly anticipated developer conference this year has not only disappointed reviewers and equity markets, it has also raised many disturbing questions. One, it has raised anxieties over the future viability of the iPhone maker. More importantly, it highlights the divergence between what is good for society and what is good for the stock market, belying the benign and popular perception of the role that equity markets play in society.

Apple's developer conference, called WWDC25 this year, is usually a marquee event in the tech world, giving independent developers and tech analysts an inkling of the company's progress with hardware and software. At WWDC25 on 9 June, Apple executives were able to share only a limited future path for the company, especially on its progress with artificial intelligence (AI). Apple's senior executives told the gathering that the promise made last year about upgrading Apple Intelligence, with voice assistant Siri as its centrepiece, would take some more time to accomplish.

This left many analysts and developers cold. Even the stock market expressed its displeasure: Apple's common stock has tanked almost 25% from its 52-week high. The market's disapproval, interestingly, is not limited to Apple's slow progress in integrating AI; it also centres on the company's data privacy policies. The market's unhappiness seems to stem from the company's reluctance to use a customer's individual usage data or information as an input for training personalized AI models. Apple instead prefers to use insights based on aggregate consumer preferences, unlike competitors such as Microsoft and Google. It would then appear that the market is rewarding companies that will profit from scraping individual data, rather than businesses that have red lines on using personal data.

The market's simple logic is that companies using personal data for their AI engines can unlock new revenue sources by offering consumers hyper-personalized offerings. Plus, there is the subscription angle. For example, individual subscription rates for Microsoft's Copilot Pro are expected to be around $20 every month, its key selling point being its ability to unlock AI features in all Microsoft 365 products, such as Word or PowerPoint, allowing customers to generate drafts, summarize content or analyse data at a faster rate. Google AI Pro will also be available to customers at the same rate, while a turbo-charged version called Google AI Ultra will be available for $250 per month.

When this columnist asked Google's Gemini how Apple's future AI plans can make money for the company, one part of the AI engine's reply stood out: "Apple's strong stance on privacy ('intelligence without surveillance') is a powerful differentiator in an era of increasing data concerns. This can foster greater trust and loyalty among its user base, leading to continued purchases of Apple products and services over competitors. While not a direct revenue stream, it's a critical factor in sustaining its high-margin business model." Yet, the stock market did not seem impressed. Clearly, for market operators, short-term corporate profits take precedence over privacy concerns.

Even if we were to disregard the normative issues of morality or ethics for a moment, the stock market's responses are visibly out of sync with society's needs or concerns. There is a reason for this: the stock market is focused on the limited constituency it serves. Its behaviour aligns almost perfectly with the objectives set out by institutional investors or companies looking to raise money. In the not-too-distant past, the market bestowed a premium on Apple stock because the company had plugs and ports that differed from other manufacturers', guaranteeing it exclusive, high-margin revenues, even if that meant consumer discomfort or the exercise of near-monopolistic power.

The statement that the market's singular focus on corporate bottom lines and their impact on share prices does not necessarily align with what is good for broader society might seem like belabouring a truism; but the need to reiterate it has arisen in the face of a growing tendency to conflate the stock market's signals with the desires or ambitions of society as a whole.

One good example of the market-versus-society divergence is the premium that US equity markets placed on companies setting up manufacturing bases in China. This happened at a time when the US political class and civil society were bemoaning the absence of democracy and human rights in the North Asian country, even while hoping that closer integration with the global economy would discipline its regime. Reality has turned out otherwise, but stock markets continued to glorify companies that moved production to China.

The linking of stock market performance with broader social well-being in India found fresh oxygen during the covid pandemic, when benchmark indices, after initially dipping sharply, spiked with help from technology and pharma stocks. This was used by many politicians, as well as Indian fund managers aligned with the ruling party's political ideology, to signal the economy's recovery and society's triumph over the virus. Planners and policymakers must realize that a healthy society's needs, desires and ambitions extend far beyond quarterly earnings and the oscillation of benchmark indices.

The author is a senior journalist and author of 'Slip, Stitch and Stumble: The Untold Story of India's Financial Sector Reforms'. @rajrishisinghal

New York State passes RAISE Act for frontier AI models

Time of India · 13 hours ago

In a first-of-its-kind move, New York state lawmakers have passed the Responsible AI Safety and Education (RAISE) Act, aimed at preventing frontier AI models from OpenAI, Google or Anthropic from contributing to disaster scenarios, defined as the death or injury of more than 100 people or more than $1 billion in damages. According to a TechCrunch report, the legislation has been backed by top AI experts Geoffrey Hinton and Yoshua Bengio. If signed into law, it would establish the first set of legally mandated transparency standards for frontier AI labs.

The legislation follows an earlier AI safety bill that was ultimately vetoed; that bill targeted only large-scale models and did not address high-risk deployment or smaller but potentially dangerous models.

The Act now awaits the decision of New York governor Kathy Hochul, who can sign it, send it back for amendments, or veto it. The key provisions of the proposed RAISE Act include:
- Requiring AI labs to release safety and security reports on their frontier AI models.
- Mandating labs to report safety incidents, such as concerning AI model behaviour or bad actors compromising their AI systems.
- Civil penalties of up to $30 million for failure to comply.

Turning to India's AI ecosystem, analysts believe that while AI will be a catalyst for India's economic growth, guardrails and governance will be key to adopting the technology safely and building resilience amid possible disruption. A recent global survey by IBM found that AI adoption in India is higher than in other countries, though much of it remains experimentation, with adoption at scale still lagging. To mitigate safety risks, tech multinationals and global capability centres (GCCs) in the country have started hiring for specialised AI trust and safety roles, ET reported in March. Hiring in this space has surged 36% year-on-year, and demand for AI trust and safety professionals is expected to grow by 25-30% in 2025, data from Teamlease Digital showed.
