
Pixel Buds 2a could launch in four colours, Pro 2 gets 'Sterling' finish: Leak
Among the most notable leaks are new audio accessories, expanded colour palettes, and larger storage configurations.
According to information shared by tipster @MysteryLupin and reported by 9to5Google, the affordable Pixel Buds 2a are likely to arrive in four colours: Hazel (green), Strawberry (red), Iris (purple), and Fog Light (light blue).
These second-generation A-Series earbuds would follow the original Pixel Buds A-Series, which debuted in 2021 as a budget-friendly alternative in Google's audio lineup.
In addition, the Pixel Buds Pro 2 are expected to introduce a new shade called Sterling, likely a sleek grey tone designed to complement the upcoming Pixel 10 Pro and Pro XL models.
The leak also details storage and colour variants for the upcoming Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL, and Pixel 10 Pro Fold. The Pixel 10 will reportedly be available in 128 GB and 256 GB variants, with colour options including Obsidian (black), Frost (blue), Indigo (purple), and Lemongrass (yellow).
The Pixel 10 Pro is expected to come in 128 GB, 256 GB, 512 GB, and 1 TB models, with colour choices spanning Obsidian (black), Jade (green), Moonstone (grey), and Porcelain (white). The larger Pixel 10 Pro XL is anticipated in 256 GB, 512 GB, and 1 TB capacities, with the same colour range as the standard Pro model. The Pixel 10 Pro Fold is also tipped to launch in 256 GB, 512 GB, and 1 TB variants, available in Moonstone (grey) and Jade (green).
Rounding off the leak is a glimpse at new charging peripherals. A product referred to as the 'Google Pixel Charger' could reportedly arrive in a Rock Candy (white) finish, potentially echoing Apple's MagSafe puck in form and function.
Alongside it, the 'Google Pixel Wireless Charger' is expected to serve as the successor to the current Pixel Stand, hinting at faster wireless charging speeds and enhanced integration with Pixel phones. This may also align with the previously rumoured 'Pixelsnap Charger with Stand,' suggesting a refined approach to desktop wireless charging solutions.

Related Articles


Time of India
ChatGPT making us dumb & dumber, but we can still come out wiser
Claude Shannon, one of the fathers of AI, once wrote rather disparagingly: 'I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines.' As we enter the age of AI, arguably the most powerful technology of our times, many of us fear that this prophecy is coming true. Powerful AI models like ChatGPT can create complex essays, poetry and pictures; Google's Veo stitches together cinema-quality videos; Deep Research agents produce research reports at the drop of a prompt. Our innate human abilities of thinking, creating, and reasoning seem to be duplicated, sometimes surpassed, by AI.
This seemed to be confirmed by a recent, and quite disturbing, MIT Media Lab study, 'Your Brain on ChatGPT'. It suggested that while AI tools like ChatGPT help us write faster, they may be making our minds slower. Through a meticulously executed four-month experiment with 54 participants, researchers found that those who used ChatGPT for essay writing exhibited up to 55% lower brain activity, as measured by EEG signals, compared to those who wrote without assistance. If this was not troubling enough, in a later session where ChatGPT users were asked to write unaided, their brains remained less engaged than those of people without AI ('brain-only' participants, as the study quaintly labelled them). Memory also suffered: only 20% could recall what they had written, and 16% even denied authorship of their own text. The message seemed clear: outsourcing thinking to machines may be efficient, but it risks undermining our capacity for deep thought, retention, and ownership of ideas.
Technology has always changed us, and we have seen this story many times before. There was a time when you remembered everyone's phone numbers; now you can barely recall your family's, if that. You remembered roads, lanes and routes; if you did not, you consulted a paper map or asked someone. Today, Google and other map apps do that work for us. Facebook reminds us of people's birthdays; email replies suggest themselves, sparing us even that little effort of thinking. When autonomous cars arrive, will we even remember how to drive, or just loll around in our seats as they take us to our destinations?
Jonathan Haidt, in 'The Anxious Generation', points out how smartphones radically reshaped childhood. Unstructured outdoor play gave way to scrolling, and social bonds turned into notifications. Teen anxiety, loneliness, and attention deficits all surged. From calculators diminishing our mental arithmetic to GPS weakening our spatial memory, every tool we invent alters us, subtly or drastically. 'Do we shape our tools, or do our tools shape us?' is a question commonly misattributed to Marshall McLuhan, but it is hauntingly relevant in the age of AI. If we let machines do the thinking, what happens to our human capacity to think, reflect, reason, and learn?
This is especially troubling for children, and more so in India. For one, India has the highest usage of ChatGPT globally, most of it by children and young adults, who are turning into passive consumers of AI-generated knowledge. Imagine a 16-year-old using ChatGPT to write a history essay. The output might be near-perfect, but what has she actually learned? The MIT study suggests very little. Without effortful recall or critical thinking, she might not retain concepts, nor build the muscle of articulation.
With exams still based on memory and original expression, and careers requiring problem-solving, this is a silent but real risk. The real question, however, is not whether the study is correct or exaggerating, or whether AI is making us dumber, but what we can do about it. We definitely need some guardrails and precautions, and we need to start building them now. I believe that we should teach ourselves and our children to:
Ask the right questions: As answers become commodities, asking the right questions will be the differentiator. We need to relook at our education system and pedagogy and bring back this unique human skill of curiosity. Intelligence is not just about answers; it is about the courage to think, to doubt, and to create.
Invert classwork and homework: Reserve classroom time for 'brain-only' activities like journaling, debates, and mental maths. Homework can be about using AI tools to learn what will be discussed in class the next day.
AI usage codes: Just as schools restrict smartphone use, they should set clear boundaries for when and how AI can be used.
Teacher-AI synergy: Train educators to use AI as a co-teacher, not a crutch. Think of AI as Augmented Intelligence, not an alternative one.
Above all, make everyone AI literate: Much like reading, writing, and arithmetic were foundational in the digital age, knowing how to use AI wisely is the new essential skill of our time. AI literacy is more than just knowing prompts. It means understanding when to use AI and when not to; how to verify AI output for accuracy, bias, and logic; how to collaborate with AI without losing your own voice; and how to maintain cognitive and ethical agency in the age of intelligent machines. Just as we once taught 'reading, writing, adding, multiplying,' we must now teach 'thinking, prompting, questioning, verifying.'
History shows that humans adapt. The printing press did not destroy memory; calculators did not end arithmetic; smartphones did not abolish communication. We evolved with them, sometimes clumsily, but always creatively. Today, with AI, the challenge is deeper because it imitates human cognition. In fact, as AI challenges us with higher levels of creativity and cognition, human intelligence and connection will become even more prized. Take chess: a computer defeated Garry Kasparov back in 1997, and since then a computer can defeat any chess champion a hundred times out of a hundred. But human 'brains-only' chess has become much more popular, as millions follow D Gukesh's encounters with Magnus Carlsen. So, if we cultivate AI literacy and have the right guardrails in place, and if we teach ourselves and our children to think with AI but not through it, we can come out wiser, not weaker.
Disclaimer: Views expressed above are the author's own.


India.com
Rs 86000000 in salary: Google, Meta, and OpenAI ready to offer huge money for people with talent in...
Top Tier Talent Salary: To hire exceptional talent, major companies around the world are reworking their salary structures. Tech giant Google has made major changes to the way it pays salaries in order to attract talented employees, a move it sees as necessary to stay ahead in the ongoing competition in Artificial Intelligence. Not only Google and Meta but also OpenAI is offering huge salary packages to talented employees.
As per a report by Business Insider, citing US Department of Labor documents, software engineers at Google can get a base salary of USD 340,000 (approx Rs 3 crore). On top of the base salary, the company also grants shares and bonuses, further increasing total income. Notably, product managers, AI researchers and people in other technical positions are also getting impressive salary packages.
Meta Is Offering Huge Salaries
Google currently faces very tough competition from other tech giants such as Meta and OpenAI, which are also luring top AI talent with huge salaries. Meta has invested heavily in AI and is now hiring AI researchers and engineers to power its Generative AI and Reality Labs divisions. Meta's significant investment in advanced AI in 2023 is reflected in high salaries for its senior AI researchers, ranging from USD 600,000 to USD 1 million per annum, including bonuses and stock options. OpenAI, backed by Microsoft, also offers competitive packages for senior research engineers, ranging from USD 200,000 to USD 370,000 in base salary and reaching USD 800,000 to USD 1 million with equity and profit-sharing incentives.
Why Is There A Salary Increase?
The tech giants are raising salary packages because they want to keep employees capable of enhancing large language models, improving generative AI tools, and developing new technologies. According to experts, these high packages are meant not just for new hires but also to retain good employees.


NDTV
All About Varun Mohan, The Windsurf CEO Hired By Google To Strengthen Its AI
Google has hired a few team members from Windsurf, a rising AI infrastructure startup, in a move aimed at strengthening the tech behemoth's own AI code generation capabilities. At the centre of this development is Varun Mohan, the co-founder and CEO of Windsurf. Mr Mohan, along with co-founder Douglas Chen, joins Google following a high-stakes corporate tug-of-war that reportedly saw OpenAI in advanced talks to acquire Windsurf just last month.
Who is Varun Mohan, and what makes his background significant in the AI space?
Varun Mohan, who has Indian roots, was raised in Sunnyvale, California, by parents who migrated from India. He attended The Harker School in San Jose before moving on to the Massachusetts Institute of Technology (MIT), where he earned both a Bachelor's degree and a Master of Engineering in Computer Science. According to his LinkedIn profile, Mr Mohan co-founded Windsurf in June 2021. Based in Mountain View, the startup is known for developing AI tools that streamline software development workflows. Under his leadership, Windsurf quickly rose to prominence as one of the most promising names in AI infrastructure.
Before founding Windsurf, he spent over three years at Nuro, the autonomous vehicle company. He joined in early 2018 as a software engineer, became the tech lead for autonomy infrastructure by August 2019, and was made tech lead manager in 2020. His resume also includes engineering internships at some of Silicon Valley's most recognised firms. In 2017, he interned at Databricks, focusing on machine learning systems. Earlier that year, he worked at Cloudian Inc., where he helped develop S3 storage infrastructure. In 2016, he was a software engineering intern at Quora, contributing to its data infrastructure.
How Varun Mohan ended up at Google
Windsurf reportedly caught the attention of OpenAI, which was in advanced talks to acquire the startup as recently as June 2025. But Google managed to pull off a strategic coup by hiring its core team, including Mr Mohan and Mr Chen, blocking OpenAI's bid. According to Reuters, Google is paying $2.4 billion in licensing fees to access certain Windsurf technologies under non-exclusive terms, as confirmed by a person familiar with the deal. Google will not acquire any equity or controlling interest in the company. The former Windsurf team will now join Google DeepMind, where they will focus on agentic coding initiatives, particularly within the Gemini project.