Google tweaked its AI-powered Ask Photos feature and restarted its rollout
Google has improved its AI-powered Ask Photos feature and is restarting its rollout to eligible users in the US. The company paused the launch of Ask Photos in early June over issues with latency and the feature's interface.
To make Ask Photos speedier, especially on simple searches like those for dogs or people, Google says it's essentially combining the old Google Photos search with Ask Photos. While Google's Gemini models work in the background, the app can now quickly return basic results based on image recognition. Once the more complex Gemini responses are complete, they're automatically displayed.
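For readers curious how this kind of two-tier search might be wired up, here is a minimal, purely illustrative sketch of the pattern Google describes: kick off the slower AI query, show the fast classic results right away, then display the richer response when it lands. The function names and latencies are hypothetical and are not Google's actual APIs.

```python
# Hypothetical sketch of the "fast results first, smarter results later" pattern.
# None of these functions are real Google APIs; they only illustrate racing a
# quick classic search against a slower AI-backed one.
import asyncio


async def classic_search(query: str) -> list[str]:
    # Stand-in for the traditional image-recognition search: fast and keyword-based.
    await asyncio.sleep(0.1)  # pretend lookup latency
    return [f"classic match for '{query}'"]


async def gemini_search(query: str) -> list[str]:
    # Stand-in for the slower, conversational Gemini-powered response.
    await asyncio.sleep(2.0)  # pretend model latency
    return [f"detailed Gemini answer for '{query}'"]


async def ask_photos(query: str) -> None:
    # Start the slow AI search in the background.
    gemini_task = asyncio.create_task(gemini_search(query))

    # Show the quick, classic results immediately.
    print("Quick results:", await classic_search(query))

    # When the richer Gemini response arrives, display it as well.
    print("Gemini results:", await gemini_task)


if __name__ == "__main__":
    asyncio.run(ask_photos("photos of my dog at the beach"))
```

The design choice is simply not to block the basic results on the model: the user always gets something fast, and the AI answer augments the view rather than gating it.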
Announced at Google I/O 2024, Ask Photos uses AI to let you search your Google Photos library with natural language queries. Besides structuring results as a chat, Google imagines the feature being useful for surfacing information you might not even realize your photo library is storing. You could search for your license plate number, for example, or the restaurant you visited on a specific birthday.
Google started rolling out Ask Photos in September 2024, though the early version of the feature was annoying to deal with. Besides being slow to return responses, it also replaced the faster, more traditional search in the app's menu bar. If you wanted normal search results, you had to tap through multiple layers of the Google Photos interface. The new approach splits the difference by combining the results.
Ask Photos is once again rolling out to eligible Google Photos users. To receive the new feature, you need to be at least 18 years old, based in the US, and have a Google Account with the language set to English and the Google Photos "Face Groups" feature turned on.
