How a nonprofit's AI tool is giving aid workers life-saving answers during humanitarian crises
For "CXO AI Playbook," Business Insider takes a look at mini case studies about AI adoption across industries, company sizes, and technology DNA. We've asked each of the featured companies to tell us about the problems they're trying to solve with AI, who's making these decisions internally, and their vision for using AI in the future.
Founded in 1979, Mercy Corps is a global humanitarian aid organization based in Portland, Oregon. It operates in more than 40 countries and has roughly 4,000 employees supporting communities affected by poverty, disaster, conflict, and the climate crisis. The majority of its staff members are from the countries where they work.
Situation analysis: What problem was the organization trying to solve?
In the developing world, agricultural shocks like droughts, crop failures, and loss of livestock can rapidly escalate into humanitarian crises. Mercy Corps has experience anticipating these emergencies and reducing their impact. But a lack of timely, reliable data often prevents that knowledge from reaching the right people at the right time.
Alicia Morrison, the director of data science at Mercy Corps, saw potential in generative AI for getting relevant information into the hands of decision-makers more quickly.
The goal was to build a tool that could give aid workers quick, reliable answers to the day-to-day questions they face in the field. The answers would be based on past projects, research, and proven approaches, and include links to sources and citations so workers know where the information comes from.
"Making that tool available to the people doing the work helps them learn from what's been done and imagine new possibilities," she told Business Insider. "That's when we get the most creative ideas and uses of information."
Key staff and partners
Mercy Corps took part in Tech To the Rescue's AI for Changemakers program, a global accelerator that helps nonprofits experiment with AI. Through intensive, short-term training programs, Tech To the Rescue gives organizations a chance to pitch AI ideas and connect with private sector partners who can help bring them to life.
Mercy Corps matched with Cloudera, a software company focused on data management, analytics, and AI. "They had the idea and we believed we could contribute our time, resources, and skills and add value," said Rob Dickens, a solutions architect at Cloudera.
Cloudera donated engineering time and platform credits to develop the product, which is called the AI Methods Matcher. Dickens said development took about seven weeks, and the tool runs on Cloudera's AI Inference service, which uses Nvidia technology.
AI in action
Methods Matcher uses a type of generative AI called retrieval-augmented generation. It draws on an archive of successful projects to search for relevant information, summarize it, and offer recommendations. Now, decisions that aid workers make on the ground — from calculating vegetation health to tracking fertilizer distribution — can be guided by data.
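Conceptually, the retrieval step works like the minimal sketch below: the archive is indexed, a field worker's question is scored against each document, and the top matches (with their citations) are handed to a language model to draft the answer. This is an illustrative TF-IDF version only; the sample documents and function names are hypothetical, and the article does not describe Methods Matcher's actual retrieval stack.

```python
# A minimal sketch of the retrieval step in a RAG pipeline, assuming a
# plain-text archive of past project reports. ARCHIVE and retrieve() are
# hypothetical stand-ins, not Methods Matcher internals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for the archive of past projects (invented example text).
ARCHIVE = [
    "Cash assistance program adjusted transfer sizes monthly against a "
    "local market price index to preserve purchasing power.",
    "Drought early-warning pilot combined satellite vegetation indexes "
    "with field surveys to trigger livestock support.",
    "Fertilizer distribution tracking used mobile data collection to "
    "reconcile deliveries against farmer registration lists.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(ARCHIVE)

def retrieve(question: str, top_k: int = 2) -> list[tuple[float, str]]:
    """Rank archive documents by similarity to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    return sorted(zip(scores, ARCHIVE), reverse=True)[:top_k]

question = "How do I size cash aid in a region with rising inflation?"
for score, doc in retrieve(question):
    print(f"{score:.2f}  {doc[:60]}...")
# The retrieved passages, plus their citations, would then be passed to
# a language model to generate the summarized, sourced answer.
```

In a production RAG system the TF-IDF index would typically be replaced with vector embeddings, but the shape of the pipeline, retrieve first and then generate with citations, is the same.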
Morrison said the tool speeds up decision-making by reducing the time and manual research required to analyze large volumes of information. With Methods Matcher, Mercy Corps' teams can identify actions that have worked elsewhere and get evidence-based suggestions in real time.
For example, in countries facing severe inflation, Mercy Corps often provides multipurpose cash assistance. But the organization needs to know the purchasing power of that cash to make an impact. In this case, an aid worker in the field might ask the tool, "How do I determine how much cash aid to give people in a region with rising inflation?"
Methods Matcher responds with a tailored answer based on past Mercy Corps projects and research. Aid workers can ask follow-up questions in the same session, and because the tool "remembers" the conversation history, they can build on earlier questions without having to start over.
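That session "memory" is conversational state. In a typical chat-style setup, each follow-up question is sent to the model together with the earlier turns, roughly as in the hypothetical sketch below; the ask() function is a placeholder, since the article does not specify how Methods Matcher stores history.

```python
# A minimal sketch of session memory, assuming a chat-style interface:
# each follow-up carries the full conversation history, so the model can
# resolve references to earlier questions. ask() is a hypothetical
# stand-in for the underlying language model call.
def ask(history: list[dict]) -> str:
    """Placeholder for a call to the model behind the tool."""
    return f"(answer conditioned on {len(history)} prior message(s))"

history: list[dict] = []

def follow_up(question: str) -> str:
    history.append({"role": "user", "content": question})
    answer = ask(history)
    history.append({"role": "assistant", "content": answer})
    return answer

follow_up("How much cash aid should we give where inflation is rising?")
print(follow_up("And if food prices spike mid-program?"))  # sees prior turns
```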
The tool helps teams in the field quickly access information without waiting for support from HQ. "They can see for themselves how valuable this kind of information can be," Morrison said.
Did it work, and how did leaders know?
Morrison said that since the tool's launch in November 2024, Mercy Corps has yet to report metrics on its impact but has seen strong early adoption among field teams. The organization is now working with Cloudera to expand Methods Matcher, develop new AI tools, and build data literacy across its staff.
It's also gathering feedback on Methods Matcher from staff to understand what's working and what needs improvement.
"We're a nonprofit, so we don't have a big team of in-house AI experts," Morrison said. "We're learning as we go — figuring out how to maintain these tools, how to evaluate them, and how to get people across the organization on board for the long haul."
What's next?
Mercy Corps has experienced a significant shift in funding in recent months, but Morrison said Methods Matcher and other AI tools remain "a priority investment area." She added that the organization will continue to improve based on team feedback.
Dickens said Cloudera plans to bring agentic AI into the tool through its Agent Studio, automating tasks like gathering real-time data, analyzing trends, and generating reports or recommendations. This will allow Methods Matcher to surface relevant news and social media reports from affected areas, making it more responsive to events on the ground.
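In outline, that kind of agentic workflow might look like the sketch below: a loop that gathers fresh signals, analyzes them, and files a report without a person driving each step. Every function here is a hypothetical placeholder, not a Cloudera Agent Studio API; the article does not detail the planned implementation.

```python
# A hypothetical outline of the agentic workflow described above: gather
# real-time data, analyze it, and generate a report automatically. All
# functions and region names are invented placeholders.
import datetime

def fetch_field_updates(region: str) -> list[str]:
    """Placeholder: pull recent news/social reports for a region."""
    return [f"{region}: market prices up 4% week-over-week"]

def analyze_trends(updates: list[str]) -> str:
    """Placeholder: summarize updates, e.g. via a language model."""
    return f"{len(updates)} signal(s); inflation pressure rising"

def generate_report(region: str, analysis: str) -> str:
    stamp = datetime.date.today().isoformat()
    return f"[{stamp}] {region}: {analysis}"

for region in ["Region A", "Region B"]:  # hypothetical coverage areas
    updates = fetch_field_updates(region)
    print(generate_report(region, analyze_trends(updates)))
```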
"Aid workers will get richer, real-time context instead of manually compiling daily or monthly reports," he said.
Anthropic CEO Dario Amodei said last week that artificial intelligence could eliminate half of all entry-level white-collar jobs within five years and cause unemployment to skyrocket to as high as 20%. He should know better—as should many other serious academics, who have been warning for years that AI will mean the end of employment as we know it. In 2013 Carl Benedikt Frey and Michael A. Osborne of Oxford University produced a research paper estimating that 47% of U.S. employment was at risk of being eliminated by new technologies.