AI Accountability in Policing and Security (AIPAS) – Whose accountability is it anyway?

Prof P. Saskia Bayerl, CENTRIC

AIPAS’s main objective is to develop practical mechanisms for police forces to assess and ensure that their AI deployments are accountable – and can be held to account – with the aim of maintaining public trust in AI-based policing and security measures. As the saying goes, “policing is accountability” (Markham & Punch, 2007). The tricky part (probably familiar to anyone who has attempted to implement accountability) is the question: accountable for what, and to whom?

Throughout our conversations in AIPAS, we found that police explicitly and consistently understand the public as one of their core stakeholders for accountability – except that the public is not a uniform entity. Public trust in AI and public trust in policing are fragmented and varied, which means that disparate publics also have different needs when it comes to accountability: what they need to know, which information they trust, how it is communicated and by whom, and how they can and want to be involved in the accountability process. In designing practical mechanisms, this diversity of legitimate perspectives, needs and expectations towards accountable AI needs to become part of the guidance. This is a complex process, requiring frequent touchpoints and validations, and more diverse conversations than we had originally envisioned.

Our consideration of Equality, Diversity and Inclusion practice for AIPAS has motivated us to anchor diversity in all our activities, from research (e.g., 40% of participants were women, compared to the 35% currently in UK police forces) to our outcomes (e.g., ensuring that tools are also accessible to colourblind users).

And while our project aims to improve accountability in policing, we have also come to acknowledge the degree of accountability needed from our side – in particular in the way we engage with our main stakeholders, i.e., police as research partners and the public. We have therefore instituted regular coordination meetings between academic and policing partners, and started blogs and reflections by project members to share our thoughts and understandings of AI accountability. At the same time, sharing specifics can be difficult, as many of the conversations and findings are security sensitive and thus restricted. Finding the right balance between transparency and safeguarding operations is challenging and fraught with ethical considerations (touching again on the opening question of ‘accountable for what and to whom’).

Setting ourselves accountability goals, and implementing them, turned out to be as complex as developing AI accountability mechanisms – and it certainly gave us new respect for what we ask of police in terms of AI accountability.

Reference: Markham, G., & Punch, M. (2007). Embracing Accountability: The Way Forward – Part One. Policing: A Journal of Policy and Practice, 1(3), 300–308. https://doi.org/10.1093/police/pam049

More information: https://aipas.co.uk/