Google Photos will add a hidden watermark to your AI-edited images
Google's bet on AI is no secret, and it becomes evident the moment you launch any of its software products. The Google Photos app has been one of the earliest recipients of all this AI love. Now, it's time for some transparency.
Remember Magic Editor, your gateway to AI-powered editing in the Google Photos app? Moving forward, images that have received an AI makeover using the Reimagine tool in Magic Editor will get an invisible watermark.
Reimagine lets you make edits using natural language commands. All you have to do is select the elements you want to play with, and then describe the desired changes as a sentence. It can change the background, remove certain items, and add new elements, among other tricks.
You won't see the watermark, though, because it is embedded at the pixel level of AI-edited photos. Google is using its SynthID digital watermarking tool to label photos that have had an artistic lift from its AI.
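SynthID's actual algorithm is proprietary and far more robust, but the idea of a pixel-level watermark (as opposed to a metadata tag) can be illustrated with a toy least-significant-bit scheme. The signature bits and pixel values below are made up for illustration:

```python
# Toy illustration of a pixel-level watermark: hide a bit pattern in the
# least-significant bits of pixel values. This is NOT SynthID's algorithm,
# which is proprietary and survives crops, filters, and compression; it only
# shows why such a mark is invisible to the eye yet machine-readable.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels, bits=WATERMARK):
    """Overwrite the LSB of each pixel with one watermark bit (repeating)."""
    return [(p & ~1) | bits[i % len(bits)] for i, p in enumerate(pixels)]

def extract(pixels, n=len(WATERMARK)):
    """Read the watermark back from the first n pixel LSBs."""
    return [p & 1 for p in pixels[:n]]

image = [200, 201, 199, 198, 200, 202, 197, 196, 195, 200]  # fake grayscale pixels
marked = embed(image)

# Each pixel changes by at most 1 (imperceptible), yet the bits are recoverable.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
assert extract(marked) == WATERMARK
```

A naive scheme like this is destroyed by the first JPEG compression pass, which is exactly the weakness SynthID is designed to avoid.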
The detection may not always be reliable, especially for subtle changes. 'In some cases, edits made using Reimagine may be too small for SynthID to label and detect — like if you change the color of a small flower in the background of an image,' says the company.
SynthID was developed by Google DeepMind as a digital watermarking tool for AI-generated visual media. It cannot be perceived by the human eye, but machines and online systems, including Google Search, can flag it.
Adding the watermark doesn't affect the picture's quality. Even if you crop the AI-edited picture, change its color profile, add filters, or compress it, the SynthID signature survives.
Aside from images created by Google's Imagen model, SynthID has also been baked into clips generated by the Veo video generation model.
The role of AI editing in an image can also be confirmed by checking the 'About this image' panel. You can access it for online images in the Chrome browser and within Google Image Search.
Aside from information such as when an image was first indexed by Google Search and where it first appeared online, the panel also provides details about its AI origins.
The 'About this image' panel can also be reached via the Circle to Search feature on smartphones and via Google Lens in the Google mobile app on Android and iOS. Whether such images qualify for copyright protection depends on the extent of the AI's involvement.
Google's approach differs from standards such as C2PA, which are also gaining traction and use cryptographic methods to embed provenance data in the image's metadata. Notably, Google is also a committee member of the Coalition for Content Provenance and Authenticity (C2PA), alongside Amazon, Meta, OpenAI, and Microsoft.
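Real C2PA manifests use X.509 certificates and a standardized manifest format, but the core idea of cryptographically binding a provenance claim to an image's exact bytes can be sketched with a toy HMAC signature. The key and field names below are hypothetical:

```python
import hashlib
import hmac
import json

# Toy sketch of metadata-based provenance in the spirit of C2PA: sign a claim
# that is bound to the image's exact bytes. The key and field names are
# hypothetical; real C2PA uses certificate-based signatures, not a shared key.

SIGNING_KEY = b"demo-key-not-a-real-credential"

def make_claim(image_bytes, tool="Reimagine"):
    """Build a provenance claim for the image and sign it."""
    claim = {
        "tool": tool,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim, signature

def verify_claim(image_bytes, claim, signature):
    """Any change to the pixels breaks the hash; any change to the claim breaks the HMAC."""
    if hashlib.sha256(image_bytes).hexdigest() != claim["image_sha256"]:
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

image = b"\x89PNG...fake image bytes"
claim, sig = make_claim(image)
assert verify_claim(image, claim, sig)
assert not verify_claim(image + b"tamper", claim, sig)
```

The trade-off against a pixel-level watermark is visible here: signed metadata proves exactly what was claimed and by whom, but it disappears if the metadata is stripped, whereas a SynthID-style mark rides along inside the pixels themselves.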
