As rising conflict and forced displacement drive unprecedented humanitarian needs, adoption of artificial intelligence in humanitarian work has the potential to scale services for some of the most vulnerable populations in the world.
Since 2015, Signpost — a program founded by the International Rescue Committee and Mercy Corps, which creates digital help centers to aid people affected by conflict, disasters, poverty, and violence — has provided verified, localized information to more than 20 million people in 30 countries. Through websites, Facebook, WhatsApp, and other widely used platforms, it answers questions about aid access, health services, legal rights, and documentation, and connects users to human caseworkers when needed.
As demand for services grew, so did the volume of questions. Staff were inundated with repetitive but urgent requests: Where do I register? What documents do I need? Where can I find shelter?
Early AI pilots improved productivity by helping staff draft responses and categorize inquiries. But the systems required careful human review. We work in environments where mistakes have real consequences, and “move fast and break things” is not an option.
Instead of deploying generic chatbots, we began building purpose-designed AI agents with clear boundaries. These agents are trained on vetted, localized information and have defined escalation pathways to human staff. We test and evaluate extensively before launching any system.
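As a rough illustration of that design, a response flow along these lines might answer only from a vetted, localized knowledge base and hand the conversation to staff whenever confidence is low or the topic is sensitive. This is a minimal sketch under assumed names and thresholds, not Signpost's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical categories and cutoff, chosen for illustration only.
SENSITIVE_TOPICS = {"protection", "medical", "legal_status"}
CONFIDENCE_THRESHOLD = 0.75


@dataclass
class Retrieval:
    answer: str       # drafted only from vetted, localized content
    confidence: float  # retrieval / reranker score against that content
    topic: str


def respond(question: str, kb, escalate_to_caseworker) -> str:
    """Answer from vetted content, or route the question to a human."""
    result: Retrieval = kb.lookup(question)  # search curated sources only

    # Escalate when the agent "does not know" or the case is sensitive.
    if result.confidence < CONFIDENCE_THRESHOLD or result.topic in SENSITIVE_TOPICS:
        ticket = escalate_to_caseworker(question, reason=result.topic)
        return f"A caseworker will follow up with you (reference {ticket})."

    return result.answer
```

The point of the pattern is the boundary itself: the agent never improvises outside the vetted corpus, and the default for uncertainty is a person, not a guess.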
In the Rohingya refugee camps in Cox’s Bazar, Bangladesh, Uzma lost access to food assistance after her name was dropped from a data card. For months, she and her young son survived by borrowing from neighbors, unsure where to turn.
Desperate for help, Uzma was referred to InfoSheba, a Bangladeshi version of Signpost. InfoSheba quickly investigated and escalated her case to a case manager; by the end of the day, the case management team had confirmed with the data office that her name was reinstated, ensuring she would receive food rations again.
This is what effective information can do in a humanitarian crisis: connect people to the systems that determine whether they receive help. The goal is not to replace human judgment, but to reserve it for complex or sensitive cases.
Over time, Signpost has become more than an information service. It has become a teachable foundation for how we apply AI for humanitarian good, combining trusted data, user-centered design, and deep domain expertise to reach people with the right information, in the right way.

Signpost AI
In northeastern Nigeria, Yagana, a primary school teacher, relies on WhatsApp. Some of her nearly 100 students are displaced, and textbooks are scarce. Many are years behind in literacy and numeracy after repeated disruptions caused by conflict, and keeping them engaged is a challenge.
She types a question into a chatbot called aprendIA: “How can I help students learn to count in a fun way?” Within seconds, she receives practical, step-by-step guidance — taking into account the size of her class, the challenges of multilingual classrooms, and other contexts she has shared with the bot before.
She also uses aprendIA to learn how to manage students’ behavior effectively, which locally available materials can make lessons interactive, and how to adapt exercises to an overcrowded classroom. She is able to complete a short training module on classroom management, delivered in small segments through WhatsApp.
This project is part of a growing partnership with education authorities in Borno, Adamawa, and Yobe states. What began with roughly 500 teachers now reaches more than 4,700, and is expected to expand to over 22,000 by the end of 2026.
AprendIA uses the Signpost AI platform to deliver evidence-based, bite-sized learning opportunities and just-in-time support to teachers who need it most.
Beyond conflict zones
What we’re learning from tools like aprendIA is that AI isn’t a future solution for humanitarian needs — it’s already here. The question is whether it can deliver trusted, accurate, locally relevant information in fragile contexts, and do so safely.
Humanitarian AI is useful beyond conflict zones. In the U.S., refugees and other newcomers confront a maze of documentation requirements, employment eligibility rules, housing systems, and school enrollment processes. Missing a deadline or misunderstanding a requirement can have lasting consequences.
Using the Signpost AI platform, and drawing on user journey and pedagogical expertise from the aprendIA team, the IRC’s resettlement program experts designed Alma, a multilingual virtual assistant that helps newcomers navigate these systems and delivers the curriculum otherwise provided by caseworkers.
Users can ask: How do I apply for a work permit? What documents are required for a driver’s license? How can I enroll my child in school? Alma provides step-by-step guidance drawn from verified sources and explains terminology in plain language. When a case becomes complex, it routes users to a human adviser.
When Amina arrived alone in the U.S. from Afghanistan, she faced an overwhelming system — immigration paperwork, housing contracts, job applications — all in unfamiliar language and structure. “I was hopeless,” she said. “I didn’t know where to start.”
After initial support from an IRC caseworker, Amina was connected to Alma. At first skeptical, she began asking practical questions — when she could apply for a green card, where to find legal help, how to navigate housing support. The chatbot responded with clear, step-by-step guidance tailored to her location. Acting on that information, she met with accredited lawyers, secured rental assistance, and began her immigration process.
“It was like having a knowledgeable assistant in my hand, 24/7,” she said. “I wasn’t alone anymore.”
In humanitarian work, reliability matters more than novelty. Before an AI agent can respond directly to users, it must meet strict quality and safety standards through exhaustive testing and evaluation. It must recognize when it does not know the answer. It must escalate appropriately. Even after deployment, evaluation continues — tracking response quality, user satisfaction, efficiency gains, and unintended effects.
If a system does not meet standards, it is revised or scrapped. This does not result in flashy frontier AI; it produces something less glamorous and more important: tools that are predictably useful and safe.
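In practice, a launch gate of this kind can be expressed as a simple check over test results. The sketch below is hypothetical; the field names and thresholds are assumptions for illustration, not Signpost's actual evaluation criteria:

```python
def passes_launch_gate(results: list[dict]) -> bool:
    """Each item: {"correct": bool, "should_escalate": bool, "escalated": bool}."""
    # Share of test questions answered correctly from vetted content.
    accuracy = sum(r["correct"] for r in results) / len(results)

    # Of the cases that required human handling, how many were actually escalated?
    needing_escalation = [r for r in results if r["should_escalate"]]
    escalation_recall = (
        sum(r["escalated"] for r in needing_escalation) / len(needing_escalation)
        if needing_escalation
        else 1.0
    )

    # Assumed standards: the system ships only if both bars are cleared.
    return accuracy >= 0.95 and escalation_recall >= 0.99
```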
Humanitarian needs are rising globally, even as resources shrink. AI is already reshaping how aid is delivered. The question is whether it will be shaped by the realities of crisis-affected communities, or deployed without regard for context.
The measure of success is not how impressive the technology appears — it is whether it helps people make better decisions in moments of uncertainty.
