Drivers warned popular map app used by 130 MILLION is being discontinued on some mobiles as updates stop

The Sun · 3 days ago
DRIVERS have been issued a warning as a popular app used by 130 million motorists is being discontinued on some mobiles.
Many drivers now choose to use smartphone navigation apps to help them find the best routes and avoid traffic jams.
However, users of the Google-owned app Waze may soon be unable to access the latest software.
The popular app is ending support for millions of Android devices, with the newest versions of the app requiring devices to be running Android 10 or newer.
This means that if your device runs Android 9 or older, you will not be able to download any new features or updates.
This will also apply to any built-in car screens that run on Android.
Those with devices running Android 9 or lower will still be able to use the app, but over time it will become less and less reliable.
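For context, here is an illustration from us, not from Waze or the article: Android 10 corresponds to API level 29, and apps decide support based on the API level a device reports. A minimal Kotlin sketch of the kind of check involved (the helper name is hypothetical):

```kotlin
import android.os.Build

// Hypothetical helper: Android 10 ("Q") is API level 29. Devices reporting
// a lower SDK_INT (Android 9 and older) fall outside the supported range.
fun meetsAndroid10Minimum(): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q
```

In practice a cutoff like this is usually enforced through the minimum SDK level an app declares to the Play Store, so unsupported devices simply stop being offered new versions rather than failing a runtime check.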
Waze is often used by motorists for real-time traffic updates, which are sourced from Waze users.
It also has features such as petrol price comparison, making it a popular alternative to Google Maps.
Currently, only the beta version of Waze requires devices to be running Android 10.
The beta version of the app is a preview version you can opt in to that allows you to see new features and updates before anyone else.
However, when changes appear on the beta app, they are likely to end up on the regular version.
Android 10 was released back in 2019, but since there are billions of Android devices in use across the world, it is likely that many of them are still running Android 9.
If your phone is still running Android 9 or lower, you may want to think about purchasing a new mobile, as more apps are also likely to stop supporting these devices.
More Android news
This follows the news that Google has ended support for three Android devices.
At the end of March, Google quietly ended support for Android 12 and Android 12L, the versions of its operating system that some older devices still run.
That means the Google Pixel 3a, Samsung Galaxy S10 series, and OnePlus 7 series will no longer receive security updates from Google, according to Android Authority.
If there are future updates for affected phones, they will have to come directly from the manufacturers, Samsung and OnePlus.
However, Samsung offers only seven years' worth of security patches, and OnePlus typically offers three.
People with phones still running Android 12 are advised to consider upgrading to a newer device.

Related Articles

Fact check: Google Lens's AI overviews shared misleading information

Rhyl Journal · 7 hours ago

The AI overviews of searches with Google Lens have been giving users false and misleading information about certain images being shared widely on social media, a Full Fact investigation has revealed. This has happened for videos supposedly relating to the wars in Ukraine and Gaza, the India-Pakistan conflict, the June 2025 Air India plane crash and small boat arrivals in the UK.

Full Fact ran a number of searches for screenshots of key moments of misleading videos which we've fact checked in recent months using Google Lens, and found the AI overviews for at least 10 of these clips failed to recognise inauthentic content or otherwise shared false claims about what the images showed. In four examples, the AI overviews repeated the false claims we saw shared with these clips on social media – claims which Full Fact has debunked. We also found AI overviews changed with each search, even when searching the same thing, so we often weren't able to generate identical or consistent responses.

Google Lens is a visual search tool that analyses images – including stills from videos – and can surface similar pictures found online, as well as text or objects that relate to the image. According to Google, the AI overviews which sometimes appear at the top of Google Lens search results bring together 'the most relevant information from across the web' about the image, including supporting links to related pages.

These AI overviews do have a note at the bottom saying: 'AI responses may include mistakes'. This note links to a page that says: 'While exciting, this technology is rapidly evolving and improving, and may provide inaccurate or offensive information. AI Overviews can and will make mistakes.'

When we asked Google about the errors we identified, a spokesperson said they were able to reproduce some of them, and that they were caused by problems with the visual search result, rather than the AI overviews themselves. They said the search results surface web sources and social media posts that combine the visual match with false information, which then informs the AI overview.

A Google spokesperson told us: 'We aim to surface relevant, high quality information in all our Search features and we continue to raise the bar for quality with ongoing updates and improvements. When issues arise – like if our features misinterpret web content or miss some context – we use those examples to improve and take appropriate action under our policies.'

They added that the AI overviews are backed by search results, and claimed they rarely 'hallucinate'. Hallucination in this context refers to when a model generates false or conflicting information, often presented confidently, although there is some disagreement over the exact definition.

Even if AI overviews are not the source of the problem, as Google argues, they are still spreading false and misleading information on important and sensitive subjects.

Miscaptioned footage

We found several instances of AI overviews repeating claims debunked by Full Fact about real footage miscaptioned on social media. For example, a viral video claimed to show asylum seekers arriving in Dover in the UK, but this isn't true – it actually appears to show crowds of people on a beach in Goa, India. Despite this, the AI overview generated when we searched a still from this footage repeated the false claim, saying: 'The image depicts a group of people gathered on Dover Beach, a pebble beach on the coast of England.'

Another clip circulated on social media with claims it showed the Air India plane that crashed in Ahmedabad, India, on June 12. The AI overview for a key frame similarly said: 'The image shows an Air India Boeing 787 Dreamliner aircraft that crashed shortly after takeoff from Ahmedabad, India, on June 12, 2025, while en route to London Gatwick.' But this is false – the footage shows a plane taking off from Heathrow in May 2024.

Footage almost certainly generated with AI

In June, we wrote about a video shared on social media with claims it shows 'destroyed Russian warplanes' following Ukraine's drone attacks on Russian aircraft. But the clip is not real, and was almost certainly generated with artificial intelligence. When searching multiple key frames from the footage with Google Lens, we were given several different AI overviews – none of which mentioned that the footage is not real and is likely to be AI-generated.

The overview given for one screenshot said: 'The image shows two damaged warplanes, possibly Russian, on a paved surface. Recent reports indicate that multiple warplanes have exploded, including Russian aircraft that attacked a military base in Siberia.' This overview supports the false claim circulating on social media that the video shows damaged Russian warplanes, and while it's true that aircraft at Russia's Belaya military base in Siberia were damaged in that Ukrainian attack, it doesn't make sense to suggest that Russian aircraft attacked a military base in Siberia, which is mostly Russian.

AI overviews given for other screenshots of the clip wrongly claimed 'the image shows the remains of several North American F-82 Twin Mustang aircraft'. F-82s were used by the US Air Force but were retired in 1953. They also had a distinct design, with parallel twin cockpits and single tail sections, which doesn't match any of the planes depicted in the likely AI-generated video.

Footage from a video game

Gameplay footage from the military simulation game Arma 3 often circulates on social media with claims it shows genuine scenes from conflict. We found several instances when Google Lens's AI overviews failed to distinguish key frames of these clips from real footage, and instead appeared to hallucinate specific scenarios loosely relating to global events.

For example, one Arma 3 clip was shared online with false claims it showed Israeli helicopters being shot down over Gaza. When we searched a key frame with Google Lens, amid Israel-Iran air strikes following Israel's attack on Iranian nuclear infrastructure in June, the AI overview said it showed 'an Israeli Air Force (IAF) fighter jet deploying flares, likely during the recent strikes on Iran'. But the overview did not say that the footage is not real.

Another Arma 3 clip was shared amid conflict between India and Pakistan in May with false claims it showed Pakistan shooting down an Indian Air Force Rafale fighter jet near Bahawalpur in Pakistan. The AI overview said the image showed 'a Shenyang J-35A fighter jet, recently acquired by the Pakistan Air Force from China'. While there have been recent reports of the Pakistan Air Force acquiring some of these Chinese fighter jets, this is not what the footage shows and the AI overview did not say it was from a video game.

Use with caution

Google Lens is an important tool and often the first thing fact checkers use when trying to verify footage, and we've encouraged the public to use it too. This makes the inaccuracy of Google Lens's AI overviews concerning, especially given that the information features prominently at the top of people's search results, meaning false or misleading claims could be the first thing people see.

Full disclosure: Full Fact has received funding from Google and Google's charitable foundation. You can see more details about the funding Full Fact receives here. We are editorially independent and our funders have no editorial control over our content.
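As a practical aside from us (not part of Full Fact's reporting): the workflow above starts by pulling a still from a suspect video before running it through a reverse image search such as Google Lens. On Android this can be done with the platform's MediaMetadataRetriever; a minimal Kotlin sketch, with a placeholder file path and timestamp:

```kotlin
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

// Illustrative sketch: extract a frame from a local video file so it can be
// saved and submitted to a reverse image search. Inputs are placeholders.
fun extractKeyFrame(videoPath: String, timeUs: Long): Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        // OPTION_CLOSEST_SYNC snaps to the nearest sync (key) frame,
        // which is fast and usually good enough for a search query image.
        retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST_SYNC)
    } finally {
        retriever.release()
    }
}
```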

Microsoft launches inquiry into claims Israel used its tech for mass surveillance of Palestinians

The Guardian · 8 hours ago

Microsoft has launched an 'urgent' external inquiry into allegations Israel's military surveillance agency has used the company's technology to facilitate the mass surveillance of Palestinians.

The company said on Friday the formal review was in response to a Guardian investigation that revealed how the Unit 8200 spy agency has relied on Microsoft's Azure cloud platform to store a vast collection of everyday Palestinian mobile phone calls.

The joint investigation with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call found Unit 8200 made use of a customised and segregated area within Azure to store recordings of millions of calls made daily in Gaza and the West Bank.

In a statement, Microsoft said 'using Azure for the storage of data files of phone calls obtained through broad or mass surveillance of civilians in Gaza and the West Bank' would be prohibited by its terms of service.

The inquiry, to be overseen by lawyers at the US firm Covington & Burling, is the second external review commissioned by Microsoft into the use of its technology by the Israeli military. The first was launched this year amid dissent within the company and reports by the Guardian and others about Israel's reliance on the company's technology during its offensive in Gaza.

Announcing the review's findings in May, Microsoft said it had 'found no evidence to date' the Israeli military had failed to comply with its terms of service or used Azure 'to target or harm people' in Gaza.

However, the recent Guardian investigation prompted concerns among senior Microsoft executives about whether some of its Israel-based employees may have concealed information about how Unit 8200 uses Azure when questioned as part of the review.

Microsoft said on Friday the new inquiry would expand on the earlier one, adding: 'Microsoft appreciates that the Guardian's recent report raises additional and precise allegations that merit a full and urgent review.'

The company is also facing pressure from a worker-led campaign group, No Azure for Apartheid, which has accused it of 'complicity in genocide and apartheid' and demanded it cut off 'all ties to the Israeli military' and make them publicly known.

Since the Guardian and its partners, +972 and Local Call, revealed Unit 8200's sweeping surveillance project last week, Microsoft has been scrambling to assess what data the unit holds in Azure.

Several Microsoft sources familiar with internal deliberations said the company's leadership was concerned by information from Unit 8200 sources interviewed for the article, including claims that intelligence drawn from repositories of phone calls held in Azure had been used to research and identify bombing targets in Gaza.

Israel's 22-month bombardment of the territory, launched after the Hamas-led attack on 7 October 2023, has killed more than 60,000 people, the majority of them civilians, according to the health authority in the territory, though the actual death toll is likely to be significantly higher.

Senior Microsoft executives had in recent days considered an awkward scenario in which Unit 8200, an important and sensitive customer, could be in breach of the company's terms of service and human rights commitments, sources said.

If you have something to share about this story, you can contact Harry Davies and Yuval Abraham using one of the following methods.

Secure Messaging in the Guardian app
The Guardian app has a tool to send tips about stories. Messages are end-to-end encrypted and concealed within the routine activity that every Guardian mobile app performs. This prevents an observer from knowing that you are communicating with us at all, let alone what is being said. If you don't already have the Guardian app, download it (iOS/Android) and go to the menu. Select 'Secure Messaging'. To send a message to Harry and Yuval please choose the 'UK Investigations' team.

Signal Messenger
You can message Harry using the Signal Messenger app. Use the 'find by username' option and type hfd.32

Email (not secure)
If you don't need a high level of security or confidentiality you can email

SecureDrop and other secure methods
If you can safely use the Tor network without being observed or monitored you can send messages and documents to the Guardian via our SecureDrop platform. Finally, our guide at lists several ways to contact us securely, and discusses the pros and cons of each.

According to leaked files reviewed by the Guardian, the company was aware as early as late 2021 that Unit 8200 planned to move large volumes of sensitive and classified intelligence data into Azure.

At Microsoft's headquarters in November that year, senior executives – including its chief executive, Satya Nadella – attended a meeting during which Unit 8200's commander discussed a plan to move as much as 70% of its data into the cloud platform.

The company has said its executives, including Nadella, were not aware Unit 8200 planned to use or ultimately used Azure to store the content of intercepted Palestinian calls. 'We have no information related to the data stored in the customer's cloud environment,' a spokesperson said last week.

An Israeli military spokesperson has previously said its work with companies such as Microsoft is 'conducted based on regulated and legally supervised agreements' and the military 'operates in accordance with international law'.

The new inquiry will examine the military's commercial agreements with Microsoft. Once completed, the company will 'share with the public the factual findings that result from this review', its statement said.
