By Damian Eke and Gisela Reyes Cruz
Artificial intelligence (AI) design, development, deployment, ethics and governance ecosystems are currently dominated by companies, experts and perspectives from the Global North, often reflecting narratives, needs, expectations, values, priorities, contexts and principles that do not generalise to the Global South.
Evidence from the literature shows that current AI systems have colonial tendencies, conceptualised as ‘data colonialism’ (Couldry and Mejias, 2019), ‘algorithmic colonialism’ (Birhane, 2023) or ‘AI colonialism’ (Hao et al., 2024). Elements of ‘coloniality’ are inherent in AI: the idea that modern-day power imbalances between races, countries, rich and poor, and other groups are extensions of the power imbalances between coloniser and colonised (Hao, 2020; Adams, 2021). This has raised concerns about the perpetuation of colonial legacies and the marginalisation of voices, values and needs from the Global South. To mitigate these concerns, many have proposed decolonisation as a critical principle for AI design, development and deployment. This was the theme of a RAi UK workshop held on 9th May 2024 at the University of Nottingham, which explored the what, why and how of decolonisation in AI.
Colonialism was deeply characterised by exploitation, economic control and cultural imposition. These elements have been identified in AI design (e.g. data and labour exploitation), deployment (the transfer of AI technologies devoid of people’s data and contexts), and ethics and governance (the neglect of ethical principles, values and voices). This has played out in the recent rush to form global AI ethics guidelines, where developing countries in Africa, Latin America and Central Asia are often left out of the discussions, leading some to refuse to participate in international data flow agreements (Taliyan, 2019).
The implication is that developed countries continue to benefit disproportionately from global norms shaped to their advantage, while developing countries fall further behind. It is imperative to take these global relations and dynamics into consideration, as AI infrastructures and the labour force behind them are already spread across the world (Crawford, 2021).
The workshop started with a lecture on the meaning and history of decolonisation, delivered by Patrice Haynes, Assistant Professor in Philosophy at the University of Nottingham. Patrice emphasised that decolonisation is the process of reclaiming identity, agency and sovereignty, rather than merely the historical removal of colonial rule. Patrice presented examples from education and cultural heritage in which those involved have begun to question and acknowledge the colonial legacies embedded within their institutions, calling for perspectives to be broadened to include views from outside Europe and the Global North rather than for those legacies to be erased, which may be neither desirable nor meaningfully achievable. Decolonisation is thus concerned with a deeper appreciation of our social and historical context. The lecture was followed by a breakout session (in two groups) exploring the what (meaning), the why (rationale) and the how (practical approaches) of decolonising AI. Some insights from these breakout sessions are detailed below.
On the what of decolonising AI, participants highlighted that decolonising AI means recognising the colonial tendencies that exist across the AI lifecycle: imperialist ideals and the dismissal of non-Western values. Some were of the opinion that, because technologies are not neutral, decolonising AI comprises efforts to ensure that all relevant societal values are embedded in AI systems developed for those societies. It is about understanding diverse perceptions of AI, respecting cultural views, and making sure the AI lifecycle is free of the exploitation of resources, people and the environment, elements that characterise coloniality. It is also about ensuring that the Global South retains ownership of the data, natural resources and algorithms it generates and that shape AI systems. In particular, participants emphasised that decolonising AI is not a passive exercise but an active process of challenging coloniality in the design and use of AI and reimagining AI in ways that ensure equity.
Discussing why decolonising AI is of critical importance, participants listed a number of reasons: to address the inherent power imbalance in AI, to ensure an equitable distribution of benefits that achieves globally fair AI, and to prevent the Global North from dictating or controlling the narrative around AI. ‘Perspectives, needs, contexts, values and principles beyond the Global North matter’ was a consensus view among participants. Simply put, participants believe that decolonising AI is the right thing to do for global justice and is therefore a necessity. Decolonising is a process that leaves room for different voices to be heard, aiming not at a universal perspective but at a fair, pluriversal approach to AI.
But the critical question is: how can this be achieved? Both groups highlighted that collaboration between critical stakeholders (including policymakers from both the Global North and the Global South, designers and other industry actors, academia, and citizens) is necessary. Such collaboration can help set out requirements for decolonising AI across different contexts, as well as assessment mechanisms to ensure those requirements are met. It will also involve clear stakeholder mapping and giving agency to key stakeholders, all of whom need to embrace a decolonising process that includes respecting different perspectives. Participants also believe that decolonising AI requires the co-creation of a toolkit (for training and education at different stages) and the development of governance mechanisms for implementing these requirements, with a focus on Global South values in AI design, development, deployment, ethics and governance.

This workshop was organised by RAi UK researchers Damian Eke and Gisela Reyes Cruz.
Participants included (in alphabetical order): Pearl Agyakwa, Yang Bong, Favour Borokini, Andriana Boudouraki, Pat Brundell, Harriet Cameron, Catarina Gomes da Costa, Daniyar Sapargaliyev.
REFERENCES