
SAFE AI Project
AI has the ability to accelerate safe innovation and change humanitarian action for the better. Yet, there is a real risk of an underfunded, overstretched humanitarian sector accelerating towards unsafe use of AI to reduce costs with serious unintended consequences for vulnerable populations.
In response to this rapidly evolving landscape, CDAC Network, The Alan Turing Institute and Humanitarian AI Advisory have partnered to launch the SAFE AI project: Standards and Assurance Framework for Ethical Artificial Intelligence. This initiative, funded by the UK Foreign, Commonwealth & Development Office (FCDO), will develop a practical and usable foundational framework for enabling responsible AI in humanitarian action.
We're creating practical AI compliance and regulation guidelines, developing AI technological assurance tools to check if AI systems are fair and trustworthy, ensuring affected communities can participate and have a real say in how AI is used, and engaging with humanitarian organisations to build solutions that address their actual needs.
Project focus
AI Governance
Enhancing the humanitarian sector's understanding of AI governance frameworks, regulatory standards, and compliance benefits. We're creating clear guidelines that apply specifically to humanitarian contexts and align with global best practices.
AI Assurance
Providing practical tools and approaches to evaluate, validate, and improve AI systems. Our work helps humanitarian organisations assess AI models for fairness, reliability, and trustworthiness—building confidence for responsible AI adoption across the sector.
Community Participation
Developing a participatory AI playbook with methods and guidance for meaningful community engagement. We want to make sure crisis-affected communities have a voice in how AI technologies are implemented in humanitarian contexts.
Humanitarian Engagement
Directly involving humanitarian organisations in the creation, testing, and refinement of the SAFE AI framework. This collaborative approach ensures our solutions address real-world challenges faced by frontline responders.
We are in the beta phase!
The SAFE AI project is currently in its beta phase. Given the emerging and innovative nature of AI in humanitarian aid, we want to put these products to the test in real-world, practical situations. If you have any feedback or insights on any of the beta products, email info@cdacnetwork.org or fill in this feedback survey.
We also want to hear your ideas. If you want to be involved, have a project or research we should know about, or have feedback or tips, please contact us or fill in the form via the link below.
SAFE AI products
Addressing power dynamics in participatory AI for crisis-affected communities (beta)
Co-Design vs. User-Centred Design for AI Solutions (beta)
FCDO Roundtable - Building a responsible AI framework
Meet the Team
Anjali Mazumder
AI and Justice & Human Rights Lead, The Alan Turing Institute
Helen McElhinney
Executive Director, CDAC Network
Michael Tjalve
Founder, Humanitarian AI Advisory
Sarah Spencer
SAFE AI Lead
Get in touch.
Participation by actors and people at all levels of the humanitarian system is an important component of the SAFE AI project. We encourage you to get in touch with feedback, ideas and tips. Along the way we will provide opportunities to get more involved in the SAFE AI project, but feel free to submit your feedback via our form or at info@cdacnetwork.org. Submissions via the form can be anonymous.
This project has been funded by UK International Development from the UK government.