Why we urgently need a humanitarian manifesto for AI

Humanitarian deployment of artificial intelligence (AI) is becoming increasingly mainstream. If we get it right, AI offers huge opportunities for more efficient and effective delivery of assistance, as well as the potential to level the playing field and empower community-led action.

However, there are legitimate concerns that the sector has rushed to embrace AI without considering the spectrum of possible impacts on crisis-affected communities – whose voices have been notably shut out from discussions on AI design and governance.

As part of The Alan Turing Institute’s AI UK Fringe in March 2024, CDAC Network hosted an event at London’s Frontline Club to ask: ‘Do we need a humanitarian manifesto for AI?’

A short film from FilmAid Kenya and CDAC Network, shot in Kakuma refugee camp in northern Kenya and screened at the event, enabled people to share their views on what they would like AI to do for them. This first attempt to consult a refugee community on AI uses and governance highlighted a strong appetite and demand for engagement.

The keynote speech was delivered by Abeba Birhane, an Ethiopian-born cognitive scientist featured in the TIME100 Most Influential People in AI. Currently a Senior Advisor in AI Accountability at Mozilla Foundation and an Adjunct Assistant Professor at the School of Computer Science and Statistics at Trinity College Dublin, Birhane also serves on the United Nations Secretary-General’s AI Advisory Body and the newly convened AI Advisory Council in Ireland.

In a speech that explored the cultural biases, power imbalances and ethical risks embedded in the current design and deployment of AI, Birhane concluded that, for an AI solution to genuinely serve social good, it ‘must be built, controlled, and owned by indigenous peoples, to serve the needs of indigenous peoples, in a manner that is informed by and grounded in indigenous epistemologies, experiences and needs’.

The panel, chaired by CDAC’s Executive Director, Helen McElhinney, coalesced around the imperative to seize this moment of global policy debate to ‘build guardrails’ into humanitarian use of AI, taking direction from the sector’s core commitments of humanity, impartiality, neutrality and independence. Otherwise, we risk many of the world’s most vulnerable people being exploited as subjects in an unregulated experiment.

‘Stop asking what AI can do, and ask what AI should do,’ said Sarah Spencer (independent expert on AI for good). ‘The question I get all the time from humanitarian actors is, “Can AI help me do this?” The real question is, “Should AI be in this place?” And the way to figure out whether AI should be used is to do your governance policy first, not after the pilot project.’

A related concern is the protection of, and transparency about, how affected people’s data is collected and used by algorithmic models. Foni Joyce Vuni (Lead Researcher, Refugee-Led Research Hub Nairobi) noted the power imbalances inherent in her experiences as a refugee: ‘If I don’t give my data, I will not access services. But there’s no accountability back to me around how this information will be used.’

Strengthening and supporting independent and local journalism, as well as bolstering AI literacy within the media, will be critical. As Billy Perrigo (Tech Correspondent, TIME magazine) noted, journalism is ‘one of the few mechanisms we have’ to hold both tech companies and the humanitarian system to account.

Fundamentally, the panel agreed that crisis-affected communities must be involved in design and governance to ensure AI solutions are humane, culturally appropriate and disrupt, rather than entrench, existing power dynamics.

‘There is an opportunity also for tech companies to go to the communities, collaborate with local actors, in order to be mindful of the dignity of these people who are already disproportionately affected by major crises,’ said Stella Suge (Executive Director, FilmAid Kenya). ‘I hope this discussion is a move towards working with local actors, local communities, and bringing the fundamental questions to that level.’

Watch the full playlist of videos from CDAC Network at the AI UK Fringe

