Google revamps salary structure; to boost bonus for top performers

Google is overhauling its compensation structure to incentivise higher performance, rewarding high-achieving staff with larger bonuses and equity awards while scaling back payouts for those with lower ratings, Business Insider reported.
The new compensation structure at Google will take effect in time to influence 2025 end-of-year reviews and 2026 compensation.
The overhaul was outlined in an internal email titled 'Strengthening our performance culture', sent by John Casey, Google's vice-president of global compensation and benefits, as cited in the report. Casey emphasised the need to align rewards more closely with impact, stating, 'High performance is more important than ever to achieve the goals we've set.'
"This means more Googlers will have the opportunity to achieve that rating during annual reviews, and their bonus and equity award will be modelled using the O's individual multiplier in 2026," he wrote in the email, as seen by Business Insider.
Google's GRAD employee review system
Central to the changes is Google's internal review system, Googler Reviews and Development (GRAD), which the tech giant uses once a year to rate the performance of its employees. The system categorises employees across five tiers, from 'Not Enough Impact' to the top rating of 'Transformative Impact'. Most employees typically receive a 'Significant Impact' score, considered a strong rating.
The ranking a Google employee is given usually determines their bonus and equity.
Under the updated structure, managers will now be able to award the 'Outstanding Impact' rating to a larger number of employees.
New budget set aside to reward employees
In addition, discretionary budgets for managers will be increased to enable greater rewards for high performers within the 'Significant Impact' bracket.
To balance the changes, Google will reduce bonus and equity multipliers slightly for staff rated in the 'Significant Impact' and 'Moderate Impact' tiers. However, Casey assured staff that "Significant Impact will remain a strong rating — achieving it will still get you more than your target bonus".
'These changes are budget-neutral,' he added, 'and overall we're continuing to invest in comprehensive and highly competitive compensation and benefits.'
The company is doubling down on rewarding excellence to maintain its growth momentum.
The policy shift comes amid a wider trend in the tech industry, with peers such as Microsoft and Meta increasing performance pressure in pursuit of operational efficiency. While Google has not resorted to large-scale layoffs, as seen at Meta, the company is pushing for higher productivity through financial incentives.
Related Articles

Google Pixel 10 could borrow this trick from iPhone's playbook and offer brand-new accessories

Hindustan Times • 35 minutes ago

The Google Pixel 10 launch is not far away, with June already here. The device is expected to feature multiple upgrades in processing power, with the Pixel 10 anticipated to use the Tensor G5 built by TSMC instead of Samsung. The standard model is also expected to gain more cameras. However, there is one upgrade Google could borrow from Apple's playbook: support for magnetic charging and magnetic accessories, much like Apple's MagSafe.

As reported by Android Authority, the publication obtained marketing material suggesting that Google could be working on magnetic Qi2 accessories for the Pixel 10 series. The report states that Google is planning an ecosystem of magnetic accessories called PixelSnap. So far, three accessories could be in the works: a PixelSnap charger, a PixelSnap charger with a stand, and a PixelSnap ring stand. The report further mentions that the Pixel 10 series could support the Qi2 wireless charging standard, which could allow for 60W wireless charging.

Considering many Android makers, including OnePlus and Oppo, have started to offer magnetic accessories, it makes sense for Google to follow suit. MagSafe has been a popular accessory category for Apple, with its official cases and accessories supporting it. This has enabled a widespread ecosystem of accessories, including MagSafe wallets, chargers, and even power banks, and it has also been integrated into car mounts, where you can simply attach your phone magnetically.

If the Pixel 10 series does debut this feature, it would make for an interesting addition and bring the phones more in line with what the iPhone offers. It would certainly ramp up the competition and make for a much more well-rounded accessory ecosystem for Google Pixel phones.
If you are an older Pixel device owner, you can get much of the same functionality by buying magnetic cases. Makers like Spigen already offer MagFit cases for the Pixel 9 lineup, allowing you to attach your MagSafe wallets and more.

CoreWeave to offer compute capacity in Google's new cloud deal with OpenAI

Time of India • an hour ago

CoreWeave has emerged as a winner in Google's newly signed partnership with OpenAI, sources familiar with the matter told Reuters, in the latest example of the voracious appetite for computing resources in the artificial-intelligence industry and the new alliances forming to meet it.

The so-called neocloud company, which sells cloud computing services built on Nvidia's graphics processing units, is slated to provide computing capacity to Google's cloud unit, and Alphabet's Google will then sell that capacity to OpenAI to meet the growing demand for services like ChatGPT, the sources said. Google will also provide some of its own computing resources to OpenAI, added the sources, who requested anonymity to discuss private matters.

The details of the arrangement, first reported by Reuters on Tuesday, highlight the evolving dynamics between hyperscalers like Microsoft and Google and neocloud companies like CoreWeave. Hyperscalers are large cloud service providers that offer massive-scale data centres and cloud infrastructure. The insatiable hunger for computing resources has generated major investment commitments and turned rivals into partners. Backed by OpenAI and Nvidia, CoreWeave signed up Google as a customer in the first quarter. CoreWeave, Google and OpenAI declined to comment.

CoreWeave, a specialized cloud provider that went public in March, has already been a major supplier of OpenAI's infrastructure. It has signed a five-year contract worth $11.9 billion with OpenAI to provide dedicated computing capacity for OpenAI's model training and inference, and OpenAI also took a $350 million equity stake in CoreWeave in March. This partnership was further expanded last month through an additional agreement worth up to $4 billion, extending through April 2029, underscoring OpenAI's escalating demand for high-performance computing resources.
Industry insiders say adding Google Cloud as a new customer could help CoreWeave diversify its revenue sources, and having a credible partner with deep pockets like Google enables the startup to secure more favorable financing terms to support ambitious data centre buildouts across the country.

The deal could also boost Google's cloud unit, which generated $43 billion in sales last year, allowing it to capitalize on the growth of OpenAI, which is also one of its largest competitors in areas like search and chatbots. It positions Google as a neutral provider of computing resources in competition with peers such as Amazon and Microsoft.

CoreWeave's deal with Google coincides with Microsoft's re-evaluation of its data centre strategy, including withdrawing from certain data centre leases. Microsoft, once CoreWeave's largest customer, accounting for about 62% of its 2024 revenue, is also renegotiating with OpenAI to revise the terms of their multibillion-dollar investment, including the future equity stake it will hold in OpenAI.

CoreWeave, backed by Nvidia, has established itself as a fast-rising provider of GPU-based cloud infrastructure in the AI wave. While its public debut in March was met with a lukewarm response due to concerns over its highly leveraged capital structure and shifting GPU demand, the company's stock has surged since its IPO price of $40 per share, gaining over 270% and reaching a record high of $166.63 in June.

Meta unveils AI that thinks and sees the world like humans

India Today • an hour ago

Meta has introduced a new artificial intelligence model called V-JEPA 2, which can help AI agents better understand and predict the real world – much like how humans observe, think, and plan before taking any action. According to Meta, this new open-source AI model is a big step towards developing what it calls advanced machine intelligence (AMI). AMI is Meta's vision for the future: an AI that can not only process data but also learn from its surroundings and predict how things will change – just like humans do every day. Meta calls V-JEPA 2 its most sophisticated world model to date.

V-JEPA 2 stands for Video Joint Embedding Predictive Architecture 2. The model is primarily trained on vast amounts of video footage. The company explains that by watching a huge number of video clips – over a million hours – the AI learnt how people interact with objects, how things move, and how different actions affect the world around them. With this training, the model can enable robots and AI systems to anticipate how objects behave, how environments respond to motion, and how different elements interact physically.

'As humans, we have the ability to predict how the physical world will evolve in response to our actions or the actions of others,' Meta said in its official blog post. 'V-JEPA 2 helps AI agents mimic this intelligence, making them smarter about the physical world.'

Giving an example, Meta explains that just as a person knows a tennis ball will fall back down if thrown into the air, V-JEPA 2 can learn this kind of common-sense behaviour by observing video. This training with video and world understanding further helps the AI develop a mental map of how the physical world works.

What makes Meta's V-JEPA 2 different?

V-JEPA 2 is a 1.2-billion-parameter model that builds on its predecessor V-JEPA, which Meta unveiled last year. This new generation is said to offer significant improvements in understanding, predicting, and planning.
The company emphasises that, unlike previous systems, V-JEPA 2 is not just capable of recognising images or responding to commands – it can actually make predictions. It can look at a situation and estimate what will happen next if a certain action is taken. These capabilities, according to Meta, are essential for AI to function autonomously in real-world settings. For instance, this could allow a robot to navigate unfamiliar terrain or manipulate objects it has never seen before.

Meta reveals that it has also tested this by putting the AI model into robots in its labs. During testing, the company claims these robots were able to complete basic tasks like picking up unfamiliar objects and placing them in new spots – even in environments the robot had never seen before. The robot used the model to plan its next move based on its current view and a goal image, then chose the best action to take, step by step.

In support of the broader research community, Meta is also releasing three new benchmarks to evaluate how well AI models learn and reason from video. These benchmarks aim to standardise the way researchers test world models, offering a clearer path towards advancing physical reasoning in AI.

'By sharing this work, we aim to give researchers and developers access to the best models and benchmarks to help accelerate research and progress – ultimately leading to better and more capable AI systems that will help enhance people's lives,' Meta said.

And while the company is currently focusing on short tasks like picking and placing objects, Meta says it wants to go further – developing models that can plan long-term, break down complex tasks into smaller steps, and even use senses like touch and sound in the future.
