
AI and drones still need help from humans to find missing flood victims
Recent successes in applying computer vision and machine learning to drone imagery for rapidly assessing building and road damage after hurricanes, or tracking shifting wildfire lines, suggest that artificial intelligence could be valuable in searching for missing persons after a flood.
Machine learning systems typically take less than one second to scan a high-resolution image from a drone, versus one to three minutes for a person. Plus, drones often produce more imagery to view than is humanly possible during the critical first hours of a search, when survivors may still be alive.
Unfortunately, today's AI systems are not up to the task.
We are robotics researchers who study the use of drones in disasters. Our experiences searching for victims of flooding and numerous other events show that current implementations of AI fall short.
However, the technology can play a role in searching for flood victims. The key is AI-human collaboration.
AI's potential
Searching for flood victims is a type of wilderness search and rescue that presents unique challenges. The goal for machine learning scientists is to rank which images have signs of victims and to indicate where in those images search-and-rescue personnel should focus. If the responder sees signs of a victim, they pass the GPS location in the image to search teams in the field to check.
The ranking is done by a classifier, which is an algorithm that learns to identify similar instances of objects—cats, cars, trees—from training data in order to recognize those objects in new images. For example, in a search-and-rescue context, a classifier would spot instances of human activity, such as garbage or backpacks, to pass on to wilderness search-and-rescue teams, or even identify the missing person themselves.
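To make the ranking step concrete, here is a minimal sketch in Python, assuming an off-the-shelf person detector from the torchvision library; the model choice, the folder name and the top-20 cutoff are illustrative assumptions, not a description of any deployed system.

```python
# Rank a flight's imagery with a generic pretrained person detector,
# so a human reviews the most promising images first. The model,
# folder name and top-20 cutoff are illustrative assumptions.
from pathlib import Path

import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON = 1  # COCO class index for "person"

def best_person_score(path):
    """Return the detector's strongest 'person' confidence in one image."""
    img = convert_image_dtype(read_image(str(path)), torch.float)
    with torch.no_grad():
        detections = model([img])[0]
    scores = detections["scores"][detections["labels"] == PERSON]
    return scores.max().item() if len(scores) else 0.0

images = sorted(Path("flight_01").glob("*.jpg"))
ranked = sorted(images, key=best_person_score, reverse=True)
for path in ranked[:20]:  # hand the top of the list to a human reviewer
    print(path)
```

In practice a flood-specific model would replace the generic detector, but the pattern is the same: score every image, sort, and let humans inspect from the top of the list.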
A classifier is needed because of the sheer volume of imagery that drones can produce. For example, a single 20-minute flight can produce over 800 high-resolution images. If there are 10 flights—a small number—there would be over 8,000 images. If a responder spends only 10 seconds looking at each image, it would take over 22 hours of effort. Even if the task is divided among a group of 'squinters,' humans tend to miss areas of images and show cognitive fatigue.
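The arithmetic behind those figures is easy to check, using the numbers quoted above:

```python
# Back-of-the-envelope check of the review burden, using the figures
# quoted above (800 images per 20-minute flight, 10 seconds per image).
images_per_flight = 800
flights = 10
seconds_per_image = 10

total_images = images_per_flight * flights            # 8,000 images
review_hours = total_images * seconds_per_image / 3600
print(f"{total_images} images -> {review_hours:.1f} hours of review")
# prints: 8000 images -> 22.2 hours of review
```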
The ideal solution is an AI system that scans the entire image, prioritizes images that have the strongest signs of victims, and highlights the area of the image for a responder to inspect. It could also decide whether the location should be flagged for special attention by search-and-rescue crews.
Where AI falls short
While this seems to be a perfect opportunity for computer vision and machine learning, modern systems have a high error rate. If the system is programmed to overestimate the number of candidate locations in hopes of not missing any victims, it will likely produce too many false candidates. That would mean overloading squinters or, worse, the search-and-rescue teams, which would have to navigate through debris and muck to check the candidate locations.
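A little assumed arithmetic shows how quickly that overload builds, even for a seemingly accurate system; the error rates below are illustrative, not measured:

```python
# Illustrative false-positive arithmetic at drone-imagery volumes.
total_images = 8000
for false_positive_rate in (0.01, 0.05, 0.10):
    spurious = total_images * false_positive_rate
    print(f"{false_positive_rate:.0%} error rate -> "
          f"{spurious:.0f} false candidate sites to check on foot")
# 1% -> 80, 5% -> 400, 10% -> 800 spurious sites
```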
Developing computer vision and machine learning systems for finding flood victims is difficult for three reasons.
One is that while existing computer vision systems are certainly capable of identifying people visible in aerial imagery, the visual indicators of a flood victim are often very different from those of a lost hiker or a fugitive. Flood victims are often obscured, camouflaged, entangled in debris, or submerged in water. These visual challenges increase the possibility that existing classifiers will miss victims.
Second, machine learning requires training data, but there are no datasets of aerial imagery where humans are tangled in debris, covered in mud, and not in normal postures. This lack also increases the possibility of errors in classification.
Third, many of the drone images captured by searchers are oblique views rather than straight-down views. This means the GPS location of a candidate area is not the same as the GPS location of the drone. It is possible to compute the candidate's GPS location if the drone's altitude and camera angle are known, but unfortunately those attributes are rarely recorded. The imprecise GPS location means teams have to spend extra time searching.
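When those attributes are logged, the projection is simple trigonometry. The sketch below assumes flat terrain, an altitude measured above ground level, and illustrative parameter names:

```python
# Project the center of an oblique drone view onto the ground,
# assuming flat terrain. pitch_deg is measured from straight down
# (nadir): 0 = looking straight down, 45 = a 45-degree oblique view.
import math

def ground_target(lat, lon, altitude_m, pitch_deg, heading_deg):
    # Horizontal distance from the point beneath the drone to the target.
    offset_m = altitude_m * math.tan(math.radians(pitch_deg))

    # Shift the drone's position by that distance along the camera
    # heading, using a small-distance (equirectangular) approximation.
    meters_per_deg_lat = 111_320.0
    dlat = offset_m * math.cos(math.radians(heading_deg)) / meters_per_deg_lat
    dlon = (offset_m * math.sin(math.radians(heading_deg))
            / (meters_per_deg_lat * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

# Drone at 120 m, camera tilted 40 degrees from nadir, facing due east:
print(ground_target(29.7604, -95.3698, 120.0, 40.0, 90.0))
```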
How AI can help
Fortunately, with humans and AI working together, search-and-rescue teams can successfully use existing systems to help narrow down and prioritize imagery for further inspection.
In the case of flooding, human remains may be tangled among vegetation and debris. Therefore, a system could identify clumps of debris big enough to contain remains. A common search strategy is to identify the GPS locations of where flotsam has gathered, because victims may be part of these same deposits.
An AI classifier could find debris commonly associated with remains, such as artificial colors and construction debris with straight lines or 90-degree corners. Responders find these signs as they systematically walk the riverbanks and flood plains, but a classifier could help prioritize areas in the first few hours and days, when there may be survivors, and later could confirm that teams didn't miss any areas of interest as they navigated the difficult landscape on foot.
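As a rough illustration of the cues mentioned above, the OpenCV sketch below flags images containing saturated, 'artificial' colors or long straight edges, both of which are rare in natural debris; the thresholds are guesses that would need tuning on real flood imagery.

```python
# Score one aerial image for two debris cues: artificial colors and
# long straight lines. Thresholds here are illustrative guesses.
import cv2
import numpy as np

def debris_cues(path):
    img = cv2.imread(path)

    # Cue 1: fraction of highly saturated pixels (tarps, plastics,
    # clothing) versus the muted tones of mud and vegetation.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    artificial = float(np.mean(hsv[:, :, 1] > 150))

    # Cue 2: long straight segments (lumber, siding, roofing) found by
    # a probabilistic Hough transform on the edge map.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)
    n_lines = 0 if lines is None else len(lines)

    return {"artificial_color_fraction": artificial,
            "straight_lines": n_lines}

print(debris_cues("flight_01/IMG_0042.jpg"))
```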