SAFE AI Project

AI has the potential to accelerate safe innovation and change humanitarian action for the better. Yet there is a real risk that an underfunded, overstretched humanitarian sector will accelerate towards unsafe uses of AI to reduce costs, with serious unintended consequences for vulnerable populations.

In response to this rapidly evolving landscape, CDAC Network has partnered with The Alan Turing Institute and Humanitarian AI Advisory to launch the SAFE AI project: Standards and Assurance Framework for Ethical Artificial Intelligence. This timely initiative, funded by the UK Foreign, Commonwealth & Development Office (FCDO), will develop a foundational framework for enabling responsible AI in humanitarian action.

This initiative will create the first practical, usable SAFE AI framework for the humanitarian sector, tackling four key areas to make AI work responsibly in humanitarian settings. We're creating practical AI governance guidelines, developing AI assurance tools to check whether AI systems are fair and trustworthy, ensuring affected communities can participate and have a real say in how AI is used, and engaging with humanitarian organisations to build solutions that address their actual needs. Our goal? To help the sector use AI effectively while keeping people safe.

Join us! The SAFE AI project wants your ideas. If you want to be involved, have a project or research we should know about, or have feedback or tips, please contact us at info@cdacnetwork.org or via the link below. You can submit anonymously via this form.

Meet the Team

  • Sarah Spencer

    SAFE AI Lead

  • Michael Tjalve

    Founder, Humanitarian AI Advisory

  • Anjali Mazumder

    AI and Justice & Human Rights Lead, The Alan Turing Institute

  • Helen McElhinney

    Executive Director, CDAC Network

Project focus

  • AI Governance

    Enhancing the humanitarian sector's understanding of AI governance frameworks, regulatory standards, and compliance benefits. We're creating clear guidelines that apply specifically to humanitarian contexts and align with global best practices.

  • AI Assurance

    Providing practical tools and approaches to evaluate, validate, and improve AI systems. Our work helps humanitarian organisations assess AI models for fairness, reliability, and trustworthiness—building confidence for responsible AI adoption across the sector.

  • Community Participation

    Developing a participatory AI playbook with methods and guidance for meaningful community engagement. We want to make sure crisis-affected communities have a voice in how AI technologies are implemented in humanitarian contexts.

  • Humanitarian Engagement

    Directly involving humanitarian organisations in the creation, testing, and refinement of the SAFE AI framework. This collaborative approach ensures our solutions address real-world challenges faced by frontline responders.

Get in touch

Participation by all actors and people at all levels of the humanitarian system is an important component of the SAFE AI project. We encourage people to get in touch with feedback, ideas and tips. Along the way we will provide opportunities to get more involved in the SAFE AI project, but in the meantime feel free to submit your feedback via our form or at info@cdacnetwork.org. Submissions via the form can be anonymous.