Did you spot these unannounced Pixel features in Google's big Android Show reveal?

Adamya Sharma / Android Authority
TL;DR
• Google's Material 3 Expressive preview quietly reveals two upcoming Pixel features: AOD wallpapers and a redesigned lock screen with a compact notification shelf.
• Always On Display wallpaper support was last seen on the Pixel 3, making this its first appearance since then.
• The updated Pixel lock screen layout includes a repositioned At a Glance widget and an optional compact notification shelf.
Google has shared many announcements today through The Android Show: I/O Edition, headlined by Material 3 Expressive, a design refresh coming to Android and Wear OS. Curiously, the reveal also quietly shows off two features Google doesn't mention anywhere in today's barrage: wallpaper on the Always On Display, and Pixel's new lock screen layout with a compact notification shelf.
You will notice these two secrets hidden in plain sight if you check out the Live Updates GIF in Google's Material 3 Expressive announcement.
This GIF (converted to video format above) starts with the lock screen wallpaper visible on the Always On Display. Google hasn't talked about this feature yet, nor has it leaked for Android 16 or later releases, so this is the feature's first sighting.
However, Always On Display wallpaper support isn't new to the Google Pixel lineup. It was first spotted in the Android 9 Pie source code in 2018 and debuted on the Pixel 3. For whatever reason, later Pixels haven't offered the feature, while iPhones and even Samsung Galaxy phones have had it for a while.
The GIF also shows off the new Pixel lock screen layout and the compact notification shelf that we previously leaked. As you'll notice in the images above, the At a Glance widget's date and weather complications have moved to the right of the clock, just as we predicted back then. At the time, we noted that these complications also move below the clock when it's centered, but the GIF doesn't show that behavior.
The compact notification shelf is expected to be an optional setting that collapses notifications on the lock screen for a cleaner look. Instead of showing the full notification preview, only the app's icon appears in a small, slightly transparent chip below the At a Glance widget's contextual information complication, ensuring it doesn't obstruct the wallpaper. Tapping this chip expands the notification shade, revealing all pending notifications.
Here's a better look at the updated lock screen layout, from our previous coverage on Android's big UI overhaul:
Mishaal Rahman / Android Authority
Old vs new lock screen notification shelf in Android
These changes are not coming to Android 16's first release, but could come with future Android 16 QPR releases or even Android 17. We'll keep you updated when we learn more.
Got a tip? Talk to us! Email our staff at news@androidauthority.com. You can stay anonymous or get credit for the info; it's your choice.

Related Articles

Google quietly paused the rollout of its AI-powered ‘Ask Photos' search feature

The Verge • 28 minutes ago

Google is pausing the rollout of its AI-powered 'Ask Photos' feature within Google Photos, which has been slowly expanding since last fall. 'Ask Photos isn't where it needs to be,' wrote Jamie Aspinall, a product manager for Google Photos, in a post on X responding to criticism, citing three factors: latency, quality, and user experience. The experimental feature is powered by Google's 'most capable' Gemini AI models; specifically, a specialized version of its Gemini models that is 'only used for Ask Photos,' according to Google. Aspinall said Google had paused the feature's rollout 'at very small numbers while we address these issues,' and that in about two weeks the team would ship a better version 'that brings back the speed and recall of the original search.'

Google also announced Tuesday that keyword search in Photos is getting better, allowing you to use quotes to find exact text matches within 'filenames, camera models, captions, or text within photos,' or to search without quotes to include visual matches too.

Google announced Ask Photos last May at I/O 2024 and positioned it as a way to query your Photos app with common-sense questions that another human would typically have to help with, such as which themes you've chosen in the past for a child's birthday party or which national parks you've visited. 'Gemini's multimodal capabilities can help understand exactly what's happening in each photo and can even read text in the image if required,' the company wrote in the announcement. 'Ask Photos then crafts a helpful response and picks which photos and videos to return.'

It's not the first time Google has paused the rollout of an AI-powered feature as it competes in a quickly intensifying AI arms race against other tech giants and startups alike. Last May, within weeks of debuting AI Overviews in Google Search, Google paused the feature after nonsensical and inaccurate answers went viral on social media, with no way for users to opt out. Two high-profile examples: the feature called Barack Obama the first Muslim president of the United States and recommended putting glue on pizza to keep the cheese on. And last February, Google rolled out Gemini's image-generation tool with a good deal of fanfare, then paused it the same month after users reported historical inaccuracies, such as an AI-generated image depicting the U.S. Founding Fathers as people of color.

Barclays Warns Chrome Sale Could Sink Google 25%

Yahoo • 34 minutes ago

Alphabet (NASDAQ:GOOG) could see its shares tumble as much as 25% if a court orders the divestiture of its Chrome browser, Barclays warns. During closing arguments in the Department of Justice's antitrust case, the DOJ pressed for Google to spin off Chrome and grant rivals equal access to search data, a move aimed at breaking Google's search dominance. Judge Amit Mehta is expected to issue a remedy ruling in August, and while Google plans to appeal, Barclays analyst Ross Sandler estimates that selling Chrome (used by 4 billion people) could shave 30% off EPS, given the browser's 35% contribution to search revenue.

Despite the headwind, Sandler maintains a Buy on GOOGL, forecasting 30% upside from current levels and noting that a forced sale remains unlikely. Should divestiture occur, potential buyers include deep-pocketed AI players like OpenAI, Anthropic, or Perplexity. Investors should care because Chrome isn't just a browser; it's a gateway to search, ads, and data, and losing it would disrupt Google's ecosystem, dampen ad monetization, and require a strategic pivot in its core business model.

This article first appeared on GuruFocus.

DeepSeek may have used Google's Gemini to train its latest model

Yahoo • an hour ago

Last week, Chinese lab DeepSeek released an updated version of its R1 reasoning AI model that performs well on a number of math and coding benchmarks. The company didn't reveal the source of the data it used to train the model, but some AI researchers speculate that at least a portion came from Google's Gemini family of AI.

Sam Paech, a Melbourne-based developer who creates "emotional intelligence" evaluations for AI, published what he claims is evidence that DeepSeek's latest model was trained on outputs from Gemini. DeepSeek's model, called R1-0528, prefers words and expressions similar to those Google's Gemini 2.5 Pro favors, said Paech in an X post. That's not a smoking gun. But another developer, the pseudonymous creator of a "free speech eval" for AI called SpeechMap, noted the DeepSeek model's traces — the "thoughts" the model generates as it works toward a conclusion — "read like Gemini traces."

DeepSeek has been accused of training on data from rival AI models before. In December, developers observed that DeepSeek's V3 model often identified itself as ChatGPT, OpenAI's AI-powered chatbot platform, suggesting that it may've been trained on ChatGPT chat logs. Earlier this year, OpenAI told the Financial Times it found evidence linking DeepSeek to the use of distillation, a technique to train AI models by extracting data from bigger, more capable ones. According to Bloomberg, Microsoft, a close OpenAI collaborator and investor, detected that large amounts of data were being exfiltrated through OpenAI developer accounts in late 2024 — accounts OpenAI believes are affiliated with DeepSeek. Distillation isn't an uncommon practice, but OpenAI's terms of service prohibit customers from using the company's model outputs to build competing AI.

To be clear, many models misidentify themselves and converge on the same words and turns of phrase. That's because the open web, which is where AI companies source the bulk of their training data, is becoming littered with AI slop. Content farms are using AI to create clickbait, and bots are flooding Reddit and X. This "contamination," if you will, has made it quite difficult to thoroughly filter AI outputs from training datasets.

Still, AI experts like Nathan Lambert, a researcher at the nonprofit AI research institute AI2, don't think it's out of the question that DeepSeek trained on data from Google's Gemini. "If I was DeepSeek, I would definitely create a ton of synthetic data from the best API model out there," Lambert wrote in a post on X. "[DeepSeek is] short on GPUs and flush with cash. It's literally effectively more compute for them."

Partly in an effort to prevent distillation, AI companies have been ramping up security measures. In April, OpenAI began requiring organizations to complete an ID verification process in order to access certain advanced models. The process requires a government-issued ID from one of the countries supported by OpenAI's API; China isn't on the list. Elsewhere, Google recently began "summarizing" the traces generated by models available through its AI Studio developer platform, a step that makes it more challenging to train performant rival models on Gemini traces. Anthropic in May said it would start to summarize its own model's traces, citing a need to protect its "competitive advantages."

We've reached out to Google for comment and will update this piece if we hear back.
This article originally appeared on TechCrunch.
