Fact check: Google Lens's AI overviews shared misleading information

Leader Live · 10 hours ago
The AI overviews of searches with Google Lens have been giving users false and misleading information about certain images being shared widely on social media, a Full Fact investigation has revealed.
This has happened for videos supposedly relating to the wars in Ukraine and Gaza, the India-Pakistan conflict, the June 2025 Air India plane crash and small boat arrivals in the UK.
Using Google Lens, Full Fact ran a number of searches for screenshots of key moments from misleading videos we've fact checked in recent months, and found the AI overviews for at least 10 of these clips failed to recognise inauthentic content or otherwise shared false claims about what the images showed.
In four examples, the AI overviews repeated the false claims we saw shared with these clips on social media – claims which Full Fact has debunked. We also found AI overviews changed with each search, even when searching the same thing, so we often weren't able to generate identical or consistent responses.
Google Lens is a visual search tool that analyses images – including stills from videos – and can surface similar pictures found online, as well as text or objects that relate to the image. According to Google, the AI overviews which sometimes appear at the top of Google Lens search results bring together 'the most relevant information from across the web' about the image, including supporting links to related pages.
These AI overviews do have a note at the bottom saying: 'AI responses may include mistakes'. This note links to a page that says: 'While exciting, this technology is rapidly evolving and improving, and may provide inaccurate or offensive information. AI Overviews can and will make mistakes.'
When we asked Google about the errors we identified, a spokesperson said they were able to reproduce some of them, and that they were caused by problems with the visual search result, rather than the AI overviews themselves. They said the search results surface web sources and social media posts that combine the visual match with false information, which then informs the AI overview.
A Google spokesperson told us: 'We aim to surface relevant, high quality information in all our Search features and we continue to raise the bar for quality with ongoing updates and improvements. When issues arise – like if our features misinterpret web content or miss some context – we use those examples to improve and take appropriate action under our policies.'
They added that the AI overviews are backed by search results, and claimed they rarely 'hallucinate'. Hallucination in this context refers to when a model generates false or conflicting information, often presented confidently, although there is some disagreement over the exact definition.
Even if AI overviews are not the source of the problem, as Google argues, they are still spreading false and misleading information on important and sensitive subjects.
Miscaptioned footage
We found several instances of AI overviews repeating claims debunked by Full Fact about real footage miscaptioned on social media.
For example, a viral video claimed to show asylum seekers arriving in Dover in the UK, but this isn't true – it actually appears to show crowds of people on a beach in Goa, India. Despite this, the AI overview generated when we searched a still from this footage repeated the false claim, saying: 'The image depicts a group of people gathered on Dover Beach, a pebble beach on the coast of England.'
Another clip circulated on social media with claims it showed the Air India plane that crashed in Ahmedabad, India, on June 12. The AI overview for a key frame similarly said: 'The image shows an Air India Boeing 787 Dreamliner aircraft that crashed shortly after takeoff from Ahmedabad, India, on June 12, 2025, while en route to London Gatwick.' But this is false – the footage shows a plane taking off from Heathrow in May 2024.
Footage almost certainly generated with AI
In June, we wrote about a video shared on social media with claims it shows 'destroyed Russian warplanes' following Ukraine's drone attacks on Russian aircraft. But the clip is not real, and was almost certainly generated with artificial intelligence.
When searching multiple key frames from the footage with Google Lens, we were given several different AI overviews – none of which mentioned that the footage is not real and is likely to be AI-generated.
The overview given for one screenshot said: 'The image shows two damaged warplanes, possibly Russian, on a paved surface. Recent reports indicate that multiple warplanes have exploded, including Russian aircraft that attacked a military base in Siberia.'
This overview supports the false claim circulating on social media that the video shows damaged Russian warplanes. And while it's true that aircraft at Russia's Belaya military base in Siberia were damaged in that Ukrainian attack, it doesn't make sense to suggest that Russian aircraft attacked a military base in Siberia, which is Russian territory.
AI overviews given for other screenshots of the clip wrongly claimed 'the image shows the remains of several North American F-82 Twin Mustang aircraft'. F-82s were used by the US Air Force but were retired in 1953. They also had a distinct design, with parallel twin cockpits and single tail sections, which doesn't match any of the planes depicted in the likely AI-generated video.
Footage from a video game
Gameplay footage from the military simulation game Arma 3 often circulates on social media with claims it shows genuine scenes from conflict.
We found several instances when Google Lens's AI overviews failed to distinguish key frames of these clips from real footage, and instead appeared to hallucinate specific scenarios loosely relating to global events.
For example, one Arma 3 clip was shared online with false claims it showed Israeli helicopters being shot down over Gaza. When we searched a key frame with Google Lens, amid Israel-Iran air strikes following Israel's attack on Iranian nuclear infrastructure in June, the AI overview said it showed 'an Israeli Air Force (IAF) fighter jet deploying flares, likely during the recent strikes on Iran'. But the overview did not say that the footage is not real.
Another Arma 3 clip was shared amid conflict between India and Pakistan in May with false claims it showed Pakistan shooting down an Indian Air Force Rafale fighter jet near Bahawalpur in Pakistan.
The AI overview said the image showed 'a Shenyang J-35A fighter jet, recently acquired by the Pakistan Air Force from China'. While there have been recent reports of the Pakistan Air Force acquiring some of these Chinese fighter jets, this is not what the footage shows, and the AI overview did not say it was from a video game.
Use with caution
Google Lens is an important tool and often the first thing fact checkers use when trying to verify footage, and we've encouraged the public to use it too. This makes the inaccuracy of Google Lens's AI overviews concerning, especially given that the information features prominently at the top of people's search results, meaning false or misleading claims could be the first thing people see.
Full disclosure: Full Fact has received funding from Google and Google.org, Google's charitable foundation. You can see more details about the funding Full Fact receives here. We are editorially independent and our funders have no editorial control over our content.

