The digital frontier: Robert Mardini on the information landscape in conflict
How are shifts in the digital landscape shaping modern conflict? What role do private tech companies have to play? And how should humanitarians respond? BBC News presenter Geeta Guru-Murthy put these questions to Robert Mardini, Director-General of the International Committee of the Red Cross (ICRC), at the opening of CDAC Network’s 2023 Public Forum. Listen to their conversation above or on SoundCloud.
Key takeaways
Mis/disinformation isn’t new – but information can now be falsified, spread and weaponised at unprecedented speed and scale
Information operations have always been instrumental in armed conflict, said Mr Mardini, but what is new is ‘how quickly and how easily information today can be created, can be spread and even weaponised and consumed simultaneously on a global scale by states but also by non-state armed actors, private companies and individuals.’
Digital forms of disinformation, such as ‘deep fakes’, now leverage artificial intelligence (AI), making it increasingly difficult for even experts to distinguish fact from falsehood – particularly as the technology advances ‘more quickly than our ability to keep pace’.
This can be ‘a matter of life or death’ in situations of armed conflict, such as the war currently playing out in Gaza, where people must make critical decisions based on information available to them.
‘Let’s not forget that communication and information is a form of aid in itself,’ said Mr Mardini. ‘So, we must draw attention to these risks: it is unlikely that anyone else will.’
Non-state actors – including tech companies and digital platforms – are stakeholders in conflict and have a responsibility to safeguard information integrity
‘Non-state actors, such as tech companies and cyber groups, have clearly emerged as stakeholders in armed conflicts, with obvious incentives and opportunities,’ said Mr Mardini. In contexts such as Ukraine, they are ‘key to the digital infrastructure, and increasingly key to humanitarian system infrastructure, too.’
Humanitarian work must ‘encompass dialogue’ with these actors to ensure that responsibilities and legal limits are observed regarding the spread of harmful information. States have legal obligations to ensure private sector companies respect international humanitarian and human rights law, and digital platforms should ‘promote safety by design, neither facilitating nor encouraging the spread of harmful information’.
Mr Mardini pointed to the ICRC’s work to engage with tech companies to ensure that data protection is integral to system design and that they meet their responsibilities to stem the flow of harmful information ‘precisely in situations of armed conflict, where harmful impact is the greatest on people who are the most vulnerable and the least equipped to mitigate the risks’.
Sector take-up of technology must be guided by the humanitarian principles
Despite the risks digital technology presents, it is important that excessive caution does not prevent humanitarians from ‘leveraging technology to be more impactful in our work in supporting and protecting civilians in the line of fire, be that in the physical world or in cyberspace’.
Strengthened collaboration with non-humanitarian and non-state actors is essential. The ICRC has focused efforts on diplomacy with key players in the digital transformation, ‘aiming to carve out a neutral space for conversations around the responsible use of digital technologies in humanitarian settings’. The ICRC also recently established a delegation for cyberspace to explore the possibilities of free open-source software as a ‘safe and sustainable alternative to commercial solutions’.
Reflecting on the cyber-attack on the ICRC in 2022, Mr Mardini advocated for ‘an attitude of frugality’ towards data collection, in recognition that even the sophisticated cybersecurity systems used by governments and banks can be vulnerable, and that new harms are emerging from the uses of data in conflict. He advised humanitarian agencies to resist the ‘temptation’ to collect more data than is absolutely mission critical, and to consider deeply how to ‘do no harm’ in the digital domain.
Regulation has a role to play but, as Mr Mardini warned, there is an ‘exponentially growing gap’ between the speed at which this technology, boosted by generative AI, is developing and the ability of regulators to keep pace.
Instead, existing legal and ethical frameworks such as the Geneva Conventions should be applied to, and in some cases expanded to encompass, the new dimensions that technology introduces. Ultimately, our approach to these advances must be guided by the humanitarian principles of humanity, impartiality, neutrality and independence.
Read more: Robert Mardini’s reflections on information in conflict settings in the age of AI