
AI and drones still need help from humans to find missing flood victims
Recent successes in applying computer vision and machine learning to drone imagery for rapidly determining building and road damage after hurricanes or shifting wildfire lines suggest that artificial intelligence could be valuable in searching for missing persons after a flood.
Machine learning systems typically take less than one second to scan a high-resolution image from a drone, versus one to three minutes for a person. Plus, drones often produce more imagery to view than is humanly possible during the critical first hours of a search, when survivors may still be alive.
Unfortunately, today's AI systems are not up to the task.
We are robotics researchers who study the use of drones in disasters. Our experiences searching for victims of flooding and numerous other events show that current implementations of AI fall short.
However, the technology can play a role in searching for flood victims. The key is AI-human collaboration.
AI's potential
Searching for flood victims is a type of wilderness search and rescue that presents unique challenges. The goal for machine learning scientists is to rank which images have signs of victims and to indicate where in those images search-and-rescue personnel should focus. If the responder sees signs of a victim, they pass the GPS location in the image to search teams in the field to check.
The ranking is done by a classifier, which is an algorithm that learns to identify similar instances of objects—cats, cars, trees—from training data in order to recognize those objects in new images. For example, in a search-and-rescue context, a classifier would spot instances of human activity, such as garbage or backpacks, to pass on to wilderness search-and-rescue teams, or even identify the missing person themselves.
A classifier is needed because of the sheer volume of imagery that drones can produce. For example, a single 20-minute flight can produce over 800 high-resolution images. If there are 10 flights—a small number—there would be over 8,000 images. If a responder spends only 10 seconds looking at each image, it would take over 22 hours of effort. Even if the task is divided among a group of 'squinters,' humans tend to miss areas of images and show cognitive fatigue.
The ideal solution is an AI system that scans the entire image, prioritizes images that have the strongest signs of victims, and highlights the area of the image for a responder to inspect. It could also decide whether the location should be flagged for special attention by search-and-rescue crews.
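The triage loop described above can be sketched in a few lines. This is a hypothetical illustration, not any deployed system: the scoring function stands in for a real trained classifier, and the image records are made up.

```python
# Hypothetical sketch of AI-assisted triage: a classifier assigns each
# drone image a victim-likelihood score, and responders ("squinters")
# review images in descending order of score.

def victim_likelihood(image):
    """Placeholder for a trained classifier; returns a score in [0, 1]."""
    return image.get("score", 0.0)

def prioritize(images):
    """Rank drone images so responders see the strongest candidates first."""
    return sorted(images, key=victim_likelihood, reverse=True)

# Illustrative records; a real system would attach scores to image files.
flights = [
    {"id": "img_0001", "score": 0.12},
    {"id": "img_0002", "score": 0.87},  # strongest candidate: inspect first
    {"id": "img_0003", "score": 0.45},
]

for image in prioritize(flights):
    print(image["id"], image["score"])
```

The point of the ranking is not to replace human review but to reorder it, so the limited hours of the first search window go to the most promising imagery.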
Where AI falls short
While this seems to be a perfect opportunity for computer vision and machine learning, modern systems have a high error rate. If the system is programmed to overestimate the number of candidate locations in hopes of not missing any victims, it will likely produce too many false candidates. That would mean overloading squinters or, worse, the search-and-rescue teams, which would have to navigate through debris and muck to check the candidate locations.
Developing computer vision and machine learning systems for finding flood victims is difficult for three reasons.
One is that while existing computer vision systems are certainly capable of identifying people visible in aerial imagery, the visual indicators of a flood victim are often very different from those of a lost hiker or fugitive. Flood victims are often obscured, camouflaged, entangled in debris, or submerged in water. These visual challenges increase the likelihood that existing classifiers will miss victims.
Second, machine learning requires training data, but there are no datasets of aerial imagery where humans are tangled in debris, covered in mud, and not in normal postures. This lack also increases the possibility of errors in classification.
Third, many drone images captured by searchers are oblique views rather than straight-down views. This means the GPS location of a candidate area is not the same as the GPS location of the drone. It is possible to compute the candidate's GPS location if the drone's altitude and camera angle are known, but those attributes are rarely recorded. The imprecise GPS location means teams have to spend extra time searching.
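The geometry behind that correction is simple: if the drone's altitude and the camera's tilt from straight down (nadir) are known, the horizontal distance from the drone to the imaged point is altitude times the tangent of the tilt. The sketch below illustrates the idea with made-up numbers; it is not a full georeferencing pipeline, which would also need the drone's heading and a meters-to-degrees conversion.

```python
import math

def ground_offset_m(altitude_m, tilt_deg):
    """Horizontal distance from the drone to the imaged point, in meters.

    tilt_deg is the camera angle measured from nadir (0 = straight down).
    """
    return altitude_m * math.tan(math.radians(tilt_deg))

# A drone at 60 m with the camera tilted 40 degrees off nadir is imaging
# a point roughly 50 m away horizontally, not the ground directly below.
print(round(ground_offset_m(60, 40), 1))
```

Without the altitude and tilt, that 50-meter offset becomes uncertainty that field teams must absorb by widening their search around each candidate location.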
How AI can help
Fortunately, with humans and AI working together, search-and-rescue teams can successfully use existing systems to help narrow down and prioritize imagery for further inspection.
In the case of flooding, human remains may be tangled among vegetation and debris. Therefore, a system could identify clumps of debris big enough to contain remains. A common search strategy is to identify the GPS locations of where flotsam has gathered, because victims may be part of these same deposits.
An AI classifier could find debris commonly associated with remains, such as artificial colors and construction debris with straight lines or 90-degree corners. Responders find these signs as they systematically walk the riverbanks and flood plains, but a classifier could help prioritize areas in the first few hours and days, when there may be survivors, and later could confirm that teams didn't miss any areas of interest as they navigated the difficult landscape on foot.
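One of the cues mentioned above, artificial color, can be approximated crudely without a trained model: pixels of mud and vegetation tend to have red, green, and blue channels close together, while tarps, clothing, and painted lumber often have one channel far above the others. The threshold and sample pixels below are illustrative assumptions, not values from any fielded system.

```python
def is_artificial(rgb, min_spread=100):
    """A crude artificial-color cue: flag pixels where one color channel
    is far above the others, which rarely happens in mud or vegetation."""
    return max(rgb) - min(rgb) >= min_spread

# Illustrative pixels from a hypothetical image patch.
patch = [
    (92, 78, 60),    # muddy brown: channels close together
    (250, 60, 40),   # bright orange, tarp-like: large channel spread
    (70, 110, 55),   # vegetation green
]

flagged = [px for px in patch if is_artificial(px)]
print(len(flagged))  # the tarp-like pixel stands out
```

A real classifier would learn far richer cues than channel spread, but even a simple filter like this shows how a machine can cheaply pre-sort terrain that humans would otherwise walk and scan unaided.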