
Android Auto inches closer to the light with new theme options (APK teardown)
Adamya Sharma / Android Authority
TL;DR
- Android Auto's light theme could be coming soon, with new theming options added in the latest beta version.
- Users will soon be able to choose between Automatic, Light, and Dark themes for Android Auto.
- The new options will replace the current settings that only allow users to toggle the Google Maps theme between dark and light modes within Android Auto.
Android Auto is getting closer to introducing a system-wide light theme, a feature users have long been waiting for. In version 14.7.152544 of Android Auto, we've spotted settings for the upcoming light theme and have more details about how it's being implemented.
You're reading an Authority Insights story on Android Authority. Discover Authority Insights for more exclusive reports, app teardowns, leaks, and in-depth tech coverage you won't find anywhere else.
An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.
Android Auto's light theme has been in development for some time. Our previous teardown showed that Google plans to apply it consistently across apps, the navigation bar, and other parts of the Android Auto interface. The company even highlighted the light theme in a recent blog post during Google I/O 2025, held last month. However, Google has yet to officially announce a release date for the new theme. It now appears that the rollout may be imminent.
In the latest version, new theme-switching options appear in the Settings menu. These will replace the current settings that only allow users to toggle the Google Maps theme between dark and light modes within Android Auto.
Android Auto Current Google Maps Theme Settings
Android Auto New Theme Settings
The new theme options for Android Auto are Automatic, Light, and Dark. Selecting 'Automatic' matches Android Auto's theme to the theme setting of the connected Android phone. These options appear not only on the car dashboard; they also replace the separate Google Maps theme options within the Android Auto settings on your phone.
Current Android Auto Settings on Android Phones
New Android Auto Settings on Android Phones Showing Theme options
With the new theme settings already showing up in the latest Android Auto beta build, it's clear that Google is putting the finishing touches on the system-wide light theme. We used some tricks to enable the feature, but you might not have to wait much longer to enjoy a brighter Android Auto interface behind the wheel.
Got a tip? Talk to us! Email our staff at news@androidauthority.com. You can stay anonymous or get credit for the info; it's your choice.
Related Articles

Wall Street Journal
36 minutes ago
Senate Weighs Effectively Killing Rule That Drove Rise of Fuel-Efficient Cars
The Senate is weighing a major change to federal fuel-economy rules that would kneecap the policy that dramatically reduced gas consumption and helped create fuel-efficient cars like the Toyota Prius hybrid. Republican senators are proposing a change to the Corporate Average Fuel Economy, or CAFE, rules as part of President Trump's wide-ranging tax and spending bill. If enacted, the proposal would eliminate fines for violating CAFE, all but nullifying rules that for generations have pushed automakers to churn out ever cleaner and more fuel-efficient vehicles. That technology has saved two trillion gallons of gasoline over the past 50 years, according to the journal Energy Policy.


The Verge
38 minutes ago
Inside the courthouse reshaping the future of the internet
The future of the internet will be determined in one building in Washington, DC — and for six weeks, I watched it unfold.

For much of this spring, the E. Barrett Prettyman Courthouse in downtown Washington, DC, was buzzing with lawyers, reporters, and interested onlookers jostling between dimly lit courtrooms that hosted everyone from the richest men in Silicon Valley to fired federal workers and the DOGE-aligned officials who terminated them. The sprawling courthouse, with an airy atrium in the middle and long, dark halls that spring from it, is where cases involving government agencies often land, and that meant it was hosting two of the most consequential tech cases in the country, all while fielding a flurry of unprecedented lawsuits against President Donald Trump's administration. Between mid-April and late May, Judges James Boasberg and Amit Mehta respectively oversaw FTC v. Meta and US v. Google, a pair of long-running antitrust lawsuits that seek to split up two titans of Silicon Valley. Over the same period, several DC judges — including Boasberg — had a full docket of cases related to Trump's first 100 days in office, covering the administration's attempt to mass-deport immigrants, strip security clearance from law firms, and fire thousands of federal workers.

On the first day of the Google trial, a sign with a comically contorted arrow directed visitors toward their chosen antitrust case. It was soon joined by directions to the high-profile hearing over Trump's order against law firm Jenner & Block. While the FTC's lawyers were calling witnesses against Meta in one courtroom, a nearby room was hosting arguments about whether Trump could fire two of the agency's own commissioners. For reporters, the weeks were an exercise in constant case-juggling.
During the overlap of Google and Meta, I'd arrive to long security lines that would sometimes jut into the small park that adjoins the courthouse, then hunt down a media room that streamed video for reporters so I could avoid the electronics-free courtrooms. I'd occasionally show up to find out no such room existed, and in a small stampede of reporters, I'd rush up a few flights of spiral stairs to the courtroom, scribbling handwritten notes from the back rows. One day, my colleagues gathered around the feed waiting for a Google witness, only to see a prison-jumpsuited defendant step into the box — in the brief moment before reporters realized Mehta was taking a quick break for a criminal hearing, they wondered which high-profile tech executive it was.

The executives, for their part, were plentiful. On one day a witness box saw Meta CEO Mark Zuckerberg praising Instagram's success; a week later, former colleague and Instagram co-founder Kevin Systrom sat there describing him as a jealous boss. Google CEO Sundar Pichai would soon testify a couple floors up, followed by executives at some of Google's biggest rivals, including Microsoft and OpenAI. For all of them, the stakes were high. Judge Boasberg is tasked with determining whether Meta built an illegal monopoly by gobbling up Instagram and WhatsApp, while Judge Mehta will decide whether Google must spin off its Chrome browser or syndicate its search data.

For the judges, the gauntlet seemed nothing short of exhausting. Boasberg, chief judge of the US District Court in DC, had been assigned to the Meta case long before Trump took office, but after the inauguration, he became one of the busiest judges in America — overseeing a challenge of the administration's use of the Alien Enemies Act to deport migrants, and a lawsuit over Trump's cabinet's use of encrypted messaging app Signal to communicate about attack plans.
As I concluded a day of the Meta trial at 5PM, a fresh crop of reporters arrived to cover Boasberg's consideration of the Alien Enemies Act, which Trump was using to deport Venezuelan migrants to El Salvador. Outside the courtroom, Boasberg fielded attacks from Trump — who labeled him a 'Radical Left Lunatic' and a 'troublemaker and agitator' and called for his impeachment.

At the Meta trial, Boasberg appeared even-keeled — sometimes to the point of boredom. He rarely mentioned the rest of his docket beyond subtle references to his overflowing schedule; his interventions were astute, signaling a deep understanding of the case. But he'd often sit with his head in his hand, only occasionally gently encouraging attorneys to move on from a particularly tedious line of questioning. He used a lunch break in the Meta trial to file one of the most scathing legal rulings of the early Trump administration, accusing the administration of 'willful disregard' for his temporary restraining order on deportation flights to El Salvador, with 'probable cause' to find it in criminal contempt. By the Meta trial's end in late May, Boasberg sounded relieved as the final day wrapped. 'I will take a welcome respite from thinking about this between now and when the first brief is due,' he told the attorneys.

In 1998, the E. Barrett Prettyman courthouse played host to another tech giant fighting for its life: Microsoft. US v. Microsoft was a landmark monopoly case that determined the company had illegally wielded its dominance over Intel-compatible PC operating systems to tamp down threats to its monopoly, including up-and-coming web browsers like Netscape. But in the wake of that case and subsequent settlement, regulators took a hands-off approach to the next generation of tech companies. It would take two decades for the government to return to the battleground — until 2020, when the cases against Meta and Google were filed.
The search and social networking landscape has changed dramatically in the last five years, with the rise of TikTok and generative AI. But so too has the zeitgeist around tech. As Silicon Valley remains politically embattled, the goal of more aggressive antitrust enforcement has won bipartisan support. At the same time, there's a growing fear of foreign competition, particularly from TikTok, which appeared in the very same courthouse last year to argue against a (since-delayed) nationwide ban. The company found itself back there as a witness during Meta's trial, where lawyers confronted a TikTok executive with statements made during its failed 2024 fight.

Inside the courthouse, it was easy to forget about everything else going on in Washington — until it wasn't. I was removed from the day-to-day antics of Elon Musk's Department of Government Efficiency (DOGE) hacking away at the federal workforce, but the cases about its handiwork — including gutting the Consumer Financial Protection Bureau (CFPB) — kept winding through court. During a break on the fourth day of Meta's trial and days before the start of Google's, I got a New York Times push notification walking back from the bathroom, telling me Virginia Judge Leonie Brinkema had ruled against Google in the DOJ's separate ad-tech antitrust case. I hustled back to the media room and found several of my colleagues from other outlets already in the hallway writing up their stories. Of course, we commiserated, a decision we expected months ago would drop right now.

Rulings in this spring's Google and Meta trials will likely take months to arrive, and their fallout probably won't be seen for years. But those weeks of courthouse testimony helped illuminate countless decisions that made the tech world as we know it.
During the early 2010s, Facebook executives expressed fears that Google might buy WhatsApp and bundle it with Android, giving itself a stranglehold over mobile messaging. With the context of the Google trial, that fear looks prescient — the company cemented its search dominance by making Android phone makers preinstall its search engine in the same way. It's also possible to see the shape of giants yet to rise. Should Judge Mehta order Google to sell Chrome, several witnesses said they'd be more than happy to buy it, including Yahoo, Perplexity, and OpenAI. The Justice Department's landmark antitrust trial against Microsoft is widely credited with opening up the tech industry for innovative players like Google, and a quarter-century later, there's hope something similar could happen for new companies today. Yet it seems equally possible that in another decade or two, we'll be back in this same courthouse, hearing the government argue they've nailed the doors shut once again.


The Verge
38 minutes ago
The music industry is building the tech to hunt down AI songs
The music industry's nightmare came true in 2023, and it sounded a lot like Drake. 'Heart on My Sleeve,' a convincingly fake duet between Drake and The Weeknd, racked up millions of streams before anyone could explain who made it or where it came from. The track didn't just go viral — it broke the illusion that anyone was in control.

In the scramble to respond, a new category of infrastructure is quietly taking shape that's built not to stop generative music outright, but to make it traceable. Detection systems are being embedded across the entire music pipeline: in the tools used to train models, the platforms where songs are uploaded, the databases that license rights, and the algorithms that shape discovery. The goal isn't just to catch synthetic content after the fact. It's to identify it early, tag it with metadata, and govern how it moves through the system.

'If you don't build this stuff into the infrastructure, you're just going to be chasing your tail,' says Matt Adell, cofounder of Musical AI. 'You can't keep reacting to every new track or model — that doesn't scale. You need infrastructure that works from training through distribution.'

Startups are now popping up to build detection into licensing workflows. Platforms like YouTube and Deezer have developed internal systems to flag synthetic audio as it's uploaded and shape how it surfaces in search and recommendations. Other music companies — including Audible Magic, Pex, Rightsify, and SoundCloud — are expanding detection, moderation, and attribution features across everything from training datasets to distribution. The result is a fragmented but fast-growing ecosystem of companies treating the detection of AI-generated content not as an enforcement tool, but as table-stakes infrastructure for tracking synthetic media. Rather than detecting AI music after it spreads, some companies are building tools to tag it from the moment it's made.
Vermillio and Musical AI are developing systems to scan finished tracks for synthetic elements and automatically tag them in the metadata. Vermillio's TraceID framework goes deeper by breaking songs into stems — like vocal tone, melodic phrasing, and lyrical patterns — and flagging the specific AI-generated segments, allowing rights holders to detect mimicry at the stem level, even if a new track only borrows parts of an original. The company says its focus isn't takedowns, but proactive licensing and authenticated release. TraceID is positioned as a replacement for systems like YouTube's Content ID, which often miss subtle or partial imitations. Vermillio estimates that authenticated licensing powered by tools like TraceID could grow from $75 million in 2023 to $10 billion in 2025. In practice, that means a rights holder or platform can run a finished track through TraceID to see if it contains protected elements — and if it does, have the system flag it for licensing before release.

Some companies are going even further upstream, to the training data itself. By analyzing what goes into a model, their aim is to estimate how much a generated track borrows from specific artists or songs. That kind of attribution could enable more precise licensing, with royalties based on creative influence instead of post-release disputes. The idea echoes old debates about musical influence — like the 'Blurred Lines' lawsuit — but applies them to algorithmic generation. The difference now is that licensing can happen before release, not through litigation after the fact.

Musical AI is working on a detection system, too. The company describes its system as layered across ingestion, generation, and distribution. Rather than filtering outputs, it tracks provenance from end to end.
'Attribution shouldn't start when the song is done — it should start when the model starts learning,' says Sean Power, the company's cofounder. 'We're trying to quantify creative influence, not just catch copies.'

Deezer has developed internal tools to flag fully AI-generated tracks at upload and reduce their visibility in both algorithmic and editorial recommendations, especially when the content appears spammy. Chief Innovation Officer Aurélien Hérault says that, as of April, those tools were detecting roughly 20 percent of new uploads each day as fully AI-generated — more than double what they saw in January. Tracks identified by the system remain accessible on the platform but are not promoted. Hérault says Deezer plans to begin labeling these tracks for users directly 'in a few weeks or a few months.'

'We're not against AI at all,' Hérault says. 'But a lot of this content is being used in bad faith — not for creation, but to exploit the platform. That's why we're paying so much attention.'

Spawning AI's DNTP (Do Not Train Protocol) is pushing detection even earlier — at the dataset level. The opt-out protocol lets artists and rights holders label their work as off-limits for model training. While visual artists already have access to similar tools, the audio world is still playing catch-up. So far, there's little consensus on how to standardize consent, transparency, or licensing at scale. Regulation may eventually force the issue, but for now, the approach remains fragmented. Support from major AI training companies has also been inconsistent, and critics say the protocol won't gain traction unless it's governed independently and widely adopted.

'The opt-out protocol needs to be nonprofit, overseen by a few different actors, to be trusted,' says Spawning cofounder Mat Dryhurst. 'Nobody should trust the future of consent to an opaque centralized company that could go out of business — or much worse.'