Why community engagement is the smart strategy for AI in humanitarian response
CDAC Network at World Summit AI
At the recent World Summit AI, CDAC’s Executive Director, Helen McElhinney, made the case for deeper community participation in the development and use of artificial intelligence (AI) technologies in humanitarian settings.
Despite the increasing role of AI and digital tools in crisis response, the voices of those most affected by crises are often absent from decision-making spaces. As Helen noted, ‘We often look around and notice who is missing from the room, and it’s clear that the voices of those affected by crises are barely present.’
To bridge this gap, Helen shared insights from a community consultation CDAC and FilmAid Kenya conducted in northern Kenya’s Kakuma refugee camp. Here, young people were eager to discuss how technology impacts their lives, challenging the assumption that these topics are too complex for meaningful community engagement. ‘We figured out how to translate big concepts and trade-offs to make the conversations meaningful and constructive. It’s not easy, but it’s definitely possible,’ Helen explained.
Helen made her remarks on a panel titled The double-edged sword of AI, where she spoke alongside Katya Klinova (Head of Data and AI Initiatives, UN Secretary-General’s Innovation Lab), Paul Uithol (Director of Humanitarian Data, Humanitarian OpenStreetMap Team) and Paola Yela (Information Management and Data Science Officer, IFRC), moderated by Sarah Spencer (Director, EthicAI).
The key takeaway was that meaningful community engagement around AI is both an ethical imperative and a strategic one. By incorporating local knowledge and lived experience, humanitarian tech can be more accountable, improve data quality, foster community buy-in and allow for quick course correction when needed. ‘If we get community engagement right around technology, it’s not just the right thing to do – it’s the smart thing to do,’ Helen emphasised.
Helen also stressed the importance of ensuring that technology in humanitarian contexts doesn’t worsen existing vulnerabilities, especially in protracted crises where trust in humanitarian action must be sustained over time. The humanitarian principle of ‘do no harm’ should guide every AI initiative in the sector.
The call to action was clear: it’s imperative that we do the ‘hard work – deliberately and collectively – on community engagement around the take-up of tech before we lose the trust of communities.’ In an era of information disorder and weaponised disinformation, that trust is more fragile than ever, and it is crucial we safeguard it by putting affected people at the heart of every decision in humanitarian response.