SAFE AI
Standards and Assurance Framework for Ethical AI
AI could accelerate safe innovation and change humanitarian action for the better. Yet there is a real risk that an underfunded, overstretched humanitarian sector will rush into unsafe uses of AI to cut costs, with serious unintended consequences for vulnerable populations.
CDAC Network, The Alan Turing Institute and Humanitarian AI Advisory have partnered to launch the SAFE AI project: Standards and Assurance Framework for Ethical Artificial Intelligence. This initiative, funded by the UK Foreign, Commonwealth & Development Office (FCDO), will develop a practical and usable foundational framework for enabling responsible AI in humanitarian action.
We are creating practical AI compliance and regulation guidelines; developing AI assurance tools to check whether AI systems are fair and trustworthy; ensuring affected communities can participate ("community-in-the-loop") and have a real say in how AI is used; and engaging with humanitarian organisations to build solutions that address their actual needs.
The governance gap in humanitarian AI: addressing the structural gap between global frameworks and operational reality
The first instalment of the SAFE AI Framework establishes the nature and scale of the humanitarian AI governance gap, why it matters and why individual agency policies cannot close it alone. It sets the analytical foundation for the full framework, arriving May 2026.
Click to read the briefing paper, or explore our other tools below.
- Humanitarian AI Glossary: build better, safer and more effective partnerships
- How-to note: Co-designing AI solutions with crisis-affected communities
- Policy brief: Addressing power dynamics in participatory AI for crisis-affected communities
- Factsheet: Co-design vs. user-centred design for AI solutions
Project focus
AI Compliance and Regulation
The framework will increase the humanitarian sector's understanding of AI regulatory standards and the benefits of compliance. We will create clear guidelines for humanitarian contexts that align with global best practice.

AI Technological Assurance
Providing practical tools and approaches to evaluate, validate and improve AI systems. Our framework will help humanitarian organisations assess AI models for fairness, reliability and trustworthiness, building confidence for responsible AI adoption across the sector.

Community Participation
The framework will provide a participatory AI playbook with methods and guidance for meaningful engagement with crisis-affected communities. We want to ensure communities are in the loop and have a voice in how AI technologies are implemented in humanitarian contexts.

Humanitarian Engagement
Humanitarian organisations will be directly involved in the creation, testing and refinement of the SAFE AI framework. This collaborative approach ensures our solutions address real-world challenges faced by frontline responders.
Meet the Team
Anjali Mazumder
Research Director, AI, Accountability, Inclusion & Rights, The Alan Turing Institute

Helen McElhinney
Founding architect, SAFE AI, and Executive Director, CDAC Network

Michael Tjalve
Founder, Humanitarian AI Advisory
Get in touch.
We want to hear your ideas. If you would like to be involved, have a project or research we should know about, or have feedback or tips, please contact us via the online form or at info@cdacnetwork.org.
This project has been funded by UK International Development from the UK government.