
Can't change smart light colors with Google Assistant? You're not alone
Edgar Cervantes / Android Authority
TL;DR
- Voice commands for changing light colors are not functioning as expected for several Google Home users.
- Google Assistant reportedly responds that the lights are offline, even though changing colors works fine through the app.
- Google is aware of the issue and is actively looking into the matter.
If you've been experiencing issues changing the color of your smart lights using voice commands, you're not alone. Many Google Home users have recently reported a bug affecting this functionality, but Google is already on the case.
Instead of changing colors as expected, Google Assistant responds that the connected lights are offline, according to reports on Reddit and the Google Nest community forums. However, the lights appear online in both the Google Home and OEM apps, and users can change colors manually without problems.
This problem is not limited to smart lights from any particular brand. User reports indicate that the issue affects lights from Philips Hue, WiZ, Xiaomi, Yeelight, and Tuya, suggesting that a bug in Google Assistant may be to blame. Interestingly, the bug does not affect the ability to change the brightness or turn the lights on or off.
Thankfully, Google has already acknowledged the issue and is 'actively looking into it.' However, the company has yet to reveal the underlying cause or share details about a potential fix, though we expect more information once it identifies the root cause. In the meantime, you'll have to stick to using the Google Home or OEM-specific companion apps to change the colors of your smart lights.
Related Articles


Tom's Guide
Google just launched a new AI tool for developers — here's why it matters to everyone else
Google just launched a new AI tool called Gemini CLI, and while it's designed for developers, it could lead to smarter, more flexible AI tools for everyone else. In simple terms, Gemini CLI lets people run Google's powerful Gemini AI model right from their computer's command line.

For those who don't know, the "command line" (or terminal) is a tool that lets you type instructions directly to your computer instead of clicking buttons or using apps. It looks like a plain black-and-white window where you type commands to make things happen. You've probably seen it before without knowing its name. Developers and power users often use the command line because it's fast, flexible, and lets them automate tasks or control their system more precisely than with regular apps.

All of this might sound a little too technical for the casual user, but the bigger picture is this: by making Gemini more open and customizable, Google is giving developers new ways to build creative AI tools, and everyday users will likely benefit down the line.

Gemini CLI lets users bring Google's latest Gemini AI model — Gemini 2.5 Pro — into their terminal, with full support for writing and debugging code, automating tasks, generating content, and integrating AI into custom workflows. It's free, open-source, and comes with generous usage limits: up to 1,000 requests per day, no API key required.

Open-source means the software's code is made public, so anyone can view it, use it, or modify it, typically at no cost (ChatGPT, by contrast, is not open source). If a tool is open-source, developers around the world can improve it, fix bugs, or build their own versions of it. Being open-source also means you can see exactly how the software works. In other words, it's not a 'black box' controlled only by the company that made it.

This latest development shows Google's AI strategy is shifting toward open access and customization. By releasing Gemini CLI as an open-source tool (under the Apache 2.0 license), Google is inviting developers everywhere to build new ways to use Gemini — not just through official apps, but through personalized tools and scripts.

In short: expect a wave of new Gemini-powered tools to emerge in the coming months, many created by the community rather than by Google alone. Whether you use AI for productivity, creativity, or problem-solving, this kind of open access helps the ecosystem grow faster and could lead to more useful options for all users.

Even if you never touch the terminal, Gemini CLI is a clear sign that Google is pushing to make its AI tools more open, flexible, and customizable. That means more developers (and hobbyists) will be able to build creative new ways to use Gemini, going beyond official Google apps. In the coming months, we'll likely see more community-built tools, scripts, and AI-powered shortcuts start to surface, making it easier for everyone to take advantage of AI in new and unexpected ways.


Gizmodo
Google to Gemini Users: We're Going to Look at Your Texts Whether You Like It or Not
The idea of agentic AI is pretty appealing. Like, sure, I want AI to order my Uber, summarize my calendar, or buy cat food when I run out so my little monster doesn't go hungry, but all of those very simple things actually involve a great deal of trust. You need to be okay with handing over your payment information, your day-to-day activities, and little pieces of your life that might otherwise be totally private. That required level of intimacy is why it's important to be able to opt out of using agentic AI, and also the reason why people are so concerned by Google's recent messaging on that front.

As highlighted in a Reddit post, Google recently sent out an email to some Android users informing them that Gemini will now be able to 'help you use Phone, Messages, WhatsApp, and Utilities on your phone whether your Gemini Apps Activity is on or off.' That change, according to the email, will take place on July 7. In short, that sounds—at least on the surface—like whether you have opted in or out, Gemini has access to all of those very critical apps on your device.

Google continues in the email, which was screenshotted by Android Police, by stating that 'if you don't want to use these features, you can turn them off in Apps settings page,' but doesn't elaborate on where to find that page or what exactly will be disabled if you avail yourself of that setting option. Notably, when Apps Activity is enabled, Google stores information on your Gemini usage (inputs and responses, for example) for up to 72 hours, and some of that data may actually be reviewed by a human.

That's all to say that enabling Gemini access to those critical apps by default may be a bridge too far for some who are worried about protecting their privacy or wary of AI in general. Gizmodo has reached out to Google for further clarification on what data may be collected by default as a result of this change. We'll update this story if we receive a response.

Whether there are additional privacy concerns with this specific change or not, the fact that Gemini needs access to some of your most personal information to realize the vision of agentic AI kind of says everything you need to know. As generative AI and chatbots get integrated deeper and deeper into our phones, we'll have to have a sober conversation about just when and where we're okay with data being collected. To me, it's like the conversation about voice assistants all over again, except somehow even more fraught and pervasive. The worst part is, if we're not careful, all of that information might end up being collected without our consent, or at least without our knowledge. I don't know about you, but as much as I want AI to order me a cab, I think keeping my text messages private is a higher priority.

Hypebeast
Why Google Bought a $100 Million Stake in Gentle Monster
Multiple sources have reported that Google has invested approximately $100 million USD for a 4% stake in the fast-growing Korean eyewear label Gentle Monster. The news comes one month after the tech giant revealed partnerships with Gentle Monster and Warby Parker to develop design-forward smart lenses for its new Android XR initiative.

Google's latest move sends a strong message to competitor Meta, which has ramped up its AI-powered glasses rollout with partners Ray-Ban and Oakley. Meta unveiled its partnership with Ray-Ban in 2023 and has since launched multiple models, including the recent Orion glasses, touted as 'the most advanced pair of augmented reality (AR) glasses ever made.' And just days ago, Oakley and Meta unveiled the HSTN, their first product from a new long-term partnership in 'a new category of Performance AI glasses.'

However, it's not the first time Google has attempted to enter the smart glasses market. In 2012, Google unveiled its now obsolete Google Glass device, equipped with a camera, a small information display, and internet connectivity. More than 10 years later, the growth of augmented reality (AR) technologies and highly advanced AI has paved the way for more user-friendly and convenient wearables. Additionally, hardware advances have allowed new designs to accommodate slimmer profiles and more attractive silhouettes. In December, Google announced its revamped efforts with Android XR, 'a new operating system built for the next generation of computing,' focused on bringing heightened experiences to headsets and glasses.

Fast forward to late May, when Gentle Monster took to Instagram to share the news of their partnership, saying the collaboration 'represents a pivotal step in the evolution of smart eyewear into essential, lifestyle fashion items.' 'Creativity and sophistication are essential design features for the integration of technology into everyday life,' the brand added.

So, what could Google x Gentle Monster smart glasses look like? Compared to Ray-Ban, Oakley, and fellow Android XR partner Warby Parker, Gentle Monster is the youngest eyewear label, and the most experimental. Founded in Seoul in 2011, the brand has garnered international appeal for its trend-driven silhouettes and collaborations with innovative designer brands like Maison Margiela and Mugler. A pair of Gentle Monster smart specs could bring big tech's dream of merging its services with fashion's cultural appeal into reality like never before, paving the way for wearable concepts beyond eyewear altogether.

As of the time of writing, an official launch of Gentle Monster and Google's collaboration has not been confirmed. Stay tuned to Hypebeast for the latest fashion and tech industry insights.
Multiple sources have reported thatGooglehas invested approximately $100 million USD for a 4% stake in the fast-growing Korean eyewear labelGentle Monster. The news comes one month after the tech giant revealed partnerships with Gentle Monster andWarby Parkerto develop design-forward smart lenses for its newAndroid XRinitiative. Google's latest move sends a strong message to competitorMeta, which has ramped up its AI-powered glasses rollout with partnersRay BanandOakley. Meta unveiled its partnership with Ray-Ban in 2023 and has since launched multiple models, including the recentOrion glasses, touted as 'the most advanced pair of augmented reality (AR) glasses ever made.' And just days ago,Oakley and Meta unveiled the HSTN, their first product from a new long-term partnership in 'a new category of Performance AI glasses.' However, it's not the first time Google has attempted to enter the smart glasses market. In 2012, Google unveiled its now obsolete Google Glass device, equipped with a camera, a small information display, and internet than 10 years later, the growth of augmented reality (AR) technologies and highly advanced AI has paved the way for more user-friendly and convenient wearables. Additionally, hardware advances have allowed new designs to accommodate slimmer profiles and more attractive silhouettes. In December, Google announced its revamped efforts withAndroid XR, 'a new operating system built for the next generation of computing,' focused on bringing heightened experiences to headsets and glasses. Fast forward to late May, Gentle Monster took to Instagram to share the news of theirpartnership, saying the collaboration 'represents a pivotal step in the evolution of smart eyewear into essential, lifestyle fashion items.' 'Creativity and sophistication are essential design features for the integration of technology into everyday life,' the brand added. So, what could Google x Gentle Monster smart glasses look like? Compared to Ray-Ban, Oakley, and fellow Android XR partner Warby Parker, Gentle Monster is the youngest eyewear label, and the most experimental. Founded in Seoul in 2011, the brand has garnered international appeal for its trend-driven silhouettes and collaborations with innovative designer brands likeMaison MargielaandMugler. A pair of Gentle Monster smart specs could bring big tech's dream of merging its services with fashion's cultural appeal into reality like never before, paving the way for wearable concepts beyond eyewear altogether. As of the time of writing, an official launch of Gentle Monster and Google's collaboration has not been confirmed. Stay tuned to Hypebeast for the latest fashion and tech industry insights.