
Google admits Android's earthquake alerts failed ahead of deadly quake
TL;DR: Google has acknowledged that its Android Earthquake Alerts system did not work accurately during the devastating 2023 Turkey earthquakes.
The system issued around 500,000 lower-level 'Be Aware' notifications but only 469 top-tier 'Take Action' alerts, when roughly 10 million people should have received the latter.
Google told the BBC that every earthquake early warning system grapples with algorithm tuning challenges.
Google has admitted to the BBC that its Android Earthquake Alerts (AEA) system failed to deliver timely and accurate warnings during the devastating 7.8 magnitude earthquake that hit Turkey in 2023.
According to the BBC's report, approximately 10 million people within 158 kilometers (98 miles) of the earthquake's epicenter could have received Google's highest-tier 'Take Action' alert, but the system only sent out 469 such warnings. Had the alerts functioned correctly, those at risk might have had up to 35 seconds of advance notice to seek safety.
Instead, Google said the system issued about 500,000 lower-level 'Be Aware' notifications, which are meant for less intense shaking and do not override a phone's Do Not Disturb settings. In contrast, the stronger 'Take Action' alert is designed to wake users and prompt them about the severity of the situation, which can be critical during nighttime or early morning events like this one.
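To make the difference between the two tiers concrete, here is a minimal Kotlin sketch of how an Android app might route an estimated shaking level to either a standard notification or an urgent, screen-waking alert that asks to bypass Do Not Disturb. The tier threshold, channel IDs, and the tierFor/postQuakeAlert helpers are hypothetical illustrations, not Google's actual implementation.

```kotlin
// Illustrative sketch only: how an Android app could map an estimated shaking level
// to either a quieter heads-up or an urgent, screen-waking alert. Thresholds, channel
// IDs, and the tier split are hypothetical and not Google's actual implementation.
import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.PendingIntent
import android.content.Context
import androidx.core.app.NotificationCompat

enum class AlertTier { BE_AWARE, TAKE_ACTION }

// Hypothetical rule: stronger estimated shaking escalates to the urgent tier.
fun tierFor(estimatedShaking: Double): AlertTier =
    if (estimatedShaking >= 5.0) AlertTier.TAKE_ACTION else AlertTier.BE_AWARE

fun postQuakeAlert(context: Context, tier: AlertTier, fullScreen: PendingIntent) {
    val nm = context.getSystemService(NotificationManager::class.java)
    val channelId = if (tier == AlertTier.TAKE_ACTION) "quake_take_action" else "quake_be_aware"
    val importance = if (tier == AlertTier.TAKE_ACTION)
        NotificationManager.IMPORTANCE_HIGH else NotificationManager.IMPORTANCE_DEFAULT

    val channel = NotificationChannel(channelId, "Earthquake alerts", importance).apply {
        // Only the urgent tier asks to break through Do Not Disturb; the system honors
        // this only if the app has been granted notification policy access.
        setBypassDnd(tier == AlertTier.TAKE_ACTION)
    }
    nm.createNotificationChannel(channel)

    val builder = NotificationCompat.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.stat_sys_warning)
        .setContentTitle(if (tier == AlertTier.TAKE_ACTION) "Earthquake: take action" else "Earthquake nearby")
        .setContentText("Expect shaking. Drop, cover, and hold on.")

    if (tier == AlertTier.TAKE_ACTION) {
        // A full-screen intent can wake the screen so the warning is not missed at night.
        builder.setPriority(NotificationCompat.PRIORITY_MAX)
        builder.setFullScreenIntent(fullScreen, true)
    }
    nm.notify(1, builder.build())
}
```

The design point is that only the urgent channel is configured to wake the device and cut through Do Not Disturb, which is why sending the wrong tier mattered so much for a quake that struck in the early hours of the morning.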
Notably, after a 2023 BBC investigation into the matter, Google had told the broadcaster that the system 'performed well.' In a statement to Android Authority at the time, the company noted the following:
Our system detected both major earthquakes and many aftershocks in Turkey. During a devastating earthquake event, numerous factors can affect whether users receive, notice, or act on a supplemental alert – including the specific characteristics of the earthquake and the availability of internet connectivity. Users may also not see or pay attention to an alert in the middle of the night or while prioritizing personal and family safety during significant natural disasters.
However, in recent findings published in the journal Science, Google admits that 'limitations to the detection algorithms' caused the system to perform poorly.
Android Earthquake Alerts were launched in 2020 and use data from smartphone accelerometers to crowdsource seismic activity detection. Available in nearly 100 countries, the system has reportedly detected over 18,000 earthquakes and sent millions of alerts. The goal is to give people a few crucial seconds to move away from dangerous situations before tremors strike.
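As a rough illustration of the crowdsourcing idea, the sketch below (in Kotlin, assuming a hypothetical report callback and an invented jolt threshold) has a phone watch its accelerometer for a sudden spike and hand off a timestamped candidate detection; it is a simplification, not whatever Google actually runs on devices.

```kotlin
// Minimal sketch of the crowdsourcing idea: each phone watches its accelerometer for a
// sudden jolt and reports a timestamped candidate detection. The jolt threshold and the
// report callback are invented for illustration; a real system does far more filtering.
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.abs
import kotlin.math.sqrt

class ShakeDetector(context: Context, private val report: (Long) -> Unit) : SensorEventListener {
    private val sensorManager = context.getSystemService(SensorManager::class.java)

    fun start() {
        val accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER) ?: return
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME)
    }

    override fun onSensorChanged(event: SensorEvent) {
        val (x, y, z) = event.values
        // Deviation of total acceleration from gravity; a sharp spike suggests a jolt.
        val jolt = abs(sqrt(x * x + y * y + z * z) - SensorManager.GRAVITY_EARTH)
        if (jolt > JOLT_THRESHOLD) {
            // A real system would debounce, check that the phone is idle or charging, and
            // let a backend confirm that many nearby phones saw the same jolt at once.
            report(System.currentTimeMillis())
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit

    companion object {
        private const val JOLT_THRESHOLD = 2.0f // m/s^2, purely illustrative
    }
}
```

A single phone's jolt means very little on its own; the value comes from a backend seeing the same spike from many phones in the same area at nearly the same moment, which is what lets the network estimate an earthquake's location and strength.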
The 2023 Turkey earthquake was one of the deadliest in recent history, claiming over 55,000 lives and injuring more than 100,000. Though Google's alert system was technically operational, it underestimated the severity of the event.
What went wrong?
According to the report, Google's earthquake warning system underestimated the 7.8 magnitude quake, gauging it at between 4.5 and 4.9 on the moment magnitude scale. The system also underestimated a second major earthquake that struck later the same day. However, the second shock produced a slightly better alert response, with the system sending out around 8,158 'Take Action' notifications as well as nearly four million 'Be Aware' alerts.
Following the disaster, Google's engineers revisited their detection model. When they ran a simulation using updated algorithms, the system successfully generated 10 million 'Take Action' notifications for those near the epicenter and 67 million 'Be Aware' warnings for those further away.
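The sketch below is a toy version of that kind of replay: it feeds a recorded event through a stand-in shaking model and counts how many users would land in each alert tier, to show how a saturated magnitude estimate collapses almost everyone into the lower tier. The population figures, thresholds, and distance falloff are invented for illustration and are not Google's model.

```kotlin
// Toy replay harness in the spirit of the simulation described above: re-run a recorded
// event through a stand-in intensity model and count how many users fall into each alert
// tier. The data shapes, thresholds, and distance falloff are invented for illustration.
data class RecordedEvent(val magnitude: Double, val usersByDistanceKm: Map<Int, Long>)

// Hypothetical model: estimated shaking weakens with distance from the epicenter.
fun estimatedShaking(magnitude: Double, distanceKm: Int): Double =
    magnitude - distanceKm / 100.0

fun replay(event: RecordedEvent, takeActionAt: Double, beAwareAt: Double): Pair<Long, Long> {
    var takeAction = 0L
    var beAware = 0L
    for ((distanceKm, users) in event.usersByDistanceKm) {
        val shaking = estimatedShaking(event.magnitude, distanceKm)
        when {
            shaking >= takeActionAt -> takeAction += users
            shaking >= beAwareAt -> beAware += users
        }
    }
    return takeAction to beAware
}

fun main() {
    // Illustrative population: 10 million users near the epicenter, tens of millions farther out.
    val quake = RecordedEvent(7.8, mapOf(100 to 10_000_000L, 300 to 30_000_000L, 500 to 37_000_000L))

    // With a live estimate saturated near 4.5, nobody crosses the urgent threshold...
    println(replay(quake.copy(magnitude = 4.5), takeActionAt = 5.5, beAwareAt = 3.0)) // (0, 10000000)
    // ...while the corrected magnitude puts the near-epicenter population in the urgent tier.
    println(replay(quake, takeActionAt = 5.5, beAwareAt = 3.0))                       // (10000000, 30000000)
}
```

Run with the saturated 4.5 estimate, no one crosses the urgent threshold; run with the corrected 7.8, the near-epicenter population does, which mirrors the gap between the live response and Google's post-event simulation.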
'Every earthquake early warning system grapples with the same challenge — tuning algorithms for large magnitude events,' a Google spokesperson told the BBC.
So the lesson here is that while tools like Google's Android Earthquake Alerts system can provide life-saving warnings and are a valuable part of modern Android smartphones, they might not always work as expected.