Reports

Call for Evidence on the AI Growth Lab

RAi UK Consultation Response to DSIT’s Call for Evidence on the AI Growth Lab (2026)

We are submitting this response to the Department for Science, Innovation and Technology (DSIT) open call for evidence regarding the AI Growth Lab on behalf of Responsible AI UK (RAi UK), an open and multidisciplinary network that brings together experts from across the four nations of the UK to understand how we should shape the development of AI to benefit people, communities and society. To arrive at this response, we sent out a call to Principal Investigators and Co-Investigators of all RAi UK-funded projects, inviting them to contribute to this consultation. What follows is a synthesis of the responses we received from our research community.

Executive Summary

Overall, the RAi UK research community welcomes the government’s efforts to create spaces that balance innovation with appropriate guardrails, ensuring responsible AI development and deployment. When properly designed and implemented, regulatory sandboxes can be a helpful mechanism for addressing regulatory barriers and enhancing regulators’ learning. The comments herein aim to inform the government’s endeavours to establish a cross-economy sandbox, providing recommendations regarding its design so that the whole of the UK society can enjoy the benefits of this legal experimentation, while being attentive to and mitigating potential risks.

Professor SD Ramchurn, Professor G Neff, Dr I Parisio, Dr S Kiden, Dr A Georgara, Professor S Stumpf, Dr M Wong, Dr SF Shahandashti and Dr M Aristodemou

Download PDF

“Artificial Intelligence, Cultural Rights, and the Right to Development”

RAi Response to Expert Mechanism on the Right to Development Study (2025)

This response was submitted to the Expert Mechanism on the Right to Development (EMRTD) United Nations Human Rights Office of the High Commissioner (OHCHR) inquiry about Artificial Intelligence, Cultural Rights, and the Right to Development, on behalf of the CulturAI project. We are a team of researchers funded by Responsible AI UK (RAI UK), working on Human-Computer Interaction (HCI) and Responsible Artificial Intelligence (AI) topics, and based at the University of Nottingham, University of Southampton, and King’s College London. CulturAI is a 12-month project, running from February 2025 to February 2026. We have been investigating the evaluation of cultural sensitivity and representation in Text-to-Image (T2I) models. We welcome this consultation and aim to inform the EMRTD study.

The response is based on our research findings from a literature review on how culture has been conceptualised in Generative AI (Gen AI) research, and co-creation workshops with 59 individuals from diverse cultural backgrounds in 19 countries, in which we co-created a mixed-methods methodology for evaluating cultural representation in T2I models. This response also draws on our own experiences and expertise.

https://doi.org/10.25878/7nmj-0t79

Reyes-Cruz, G., Kiden, S., Azim, T., Bergin, A. G., Choi, S., Eke, D., Klyshbekova, M., Peter, O., Waheed, M., Devlin, K., Fischer, J. E., Ramchurn, S., Stein, S. and Vallejos, E. P.

Download PDF

Human Rights and the Regulation of AI

RAi UK Submission in response to the Call for Evidence on ‘Human Rights and the Regulation of AI’ (2025)

This is a submission, on behalf of RAi UK and the Citizen-Centric AI Systems (CCAIS) project, in response to the Joint Committee on Human Rights’ inquiry on ‘Human Rights and the Regulation of Artificial Intelligence’ published in July 2025.

The submission addresses human rights issues, as well as existing and possible changes to legal and regulatory frameworks.

Oral evidence, based on this response, was presented by Dr Sarah Kiden to the Joint Committee on Human Rights on 29 October 2025.

Dr S Kiden, Dr V Yazdanpanah, Dr R Mestre, Dr I Iusmen, Dr J Atkinson, Dr I Parisio, Prof S D Ramchurn and Prof S Stein

Download PDF

‘Human Rights and the Regulation of AI’

PROBabLE Futures Submission in response to the Call for Evidence on ‘Human Rights and the Regulation of AI’ (2025)

This is a submission in response to the Joint Committee on Human Rights’ inquiry on ‘Human Rights and the Regulation of Artificial Intelligence’ published in July 2025. The submission has been prepared by a team of academic researchers with extensive expertise in the ethical, legal, and technical implications of AI use, directly relating to human rights. Our work is anchored in the UKRI Responsible AI (RAi) UK Keystone Project, “PROBabLE Futures”.

Dr M Jorgensen, Dr A Paul and Professor M Oswald

Download PDF

'Copyright and Artificial Intelligence'

RAi UK response to the Public Consultation on Copyright and Artificial Intelligence

We submitted this response to the Intellectual Property Office (IPO), the Department for Science, Innovation and Technology (DSIT) and the Department for Culture, Media and Sport (DCMS) joint consultation on Copyright and Artificial Intelligence on behalf of Responsible AI UK (RAi UK), an open and multidisciplinary network that brings together researchers from across the four nations of the UK to understand how we should shape the development of AI to benefit people, communities and society.

Prof. S. D. Ramchurn, Prof. G. Neff, Prof. K. Devlin, Dr. I. Parisio, Prof. M. Oswald, Dr. T. Lawal, Dr. F. Ryan, Prof. B. Stahl, Dr. D. Eke, Dr. P. Ochang, Dr. H. Cameron, Dr. M. Voigts and Dr. C. Brinck

Download PDF

'High-Risk AI Systems'

PROBabLE Futures Submission in response to the European Commission’s Public Consultation on High-Risk AI Systems (2025)

This is a submission in response to the European Commission’s Public Consultation on high-risk AI systems published on 6 June 2025. The submission has been prepared by a team of academic researchers with extensive expertise in the ethical, technical, legal, safeguarding, and operational applications of data analytics and AI within the criminal justice system. Our work is anchored in the UKRI Responsible AI (RAi) UK Keystone Project, “PROBabLE Futures”, a four-year research initiative focused on evaluating probabilistic AI systems across the criminal justice sector.

Dr. T. Lawal (Northumbria), Dr. E. Taka (Glasgow), Dr. K. N. Kotsoglou (Northumbria) and Dr. M. Sevegnani (Glasgow)

Download PDF

AI Management Essentials Tool

RAi UK response to the DSIT Public Consultation on the AI Management Essentials Tool

We submitted this response to the Department for Science, Innovation and Technology (DSIT) consultation on the new AI Management Essentials (AIME) tool on behalf of Responsible AI UK (RAi UK), an open and multidisciplinary network that brings together researchers from across the four nations of the UK to understand how we should shape the development of AI to benefit people, communities and society.

Dr Isabela Parisio, Dr Sarah Kiden, Professor Gina Neff and Professor Sarvapali (Gopal) Ramchurn

Download PDF

Use of Evidence Generated by Software in Criminal Proceedings

PROBabLE Futures Submission in response to Call for Evidence on the Use of Evidence Generated by Software in Criminal Proceedings (2025)

This is a submission in response to the Ministry of Justice’s Call for Evidence on the ‘Use of Evidence Generated by Software in Criminal Proceedings’ published on 21 January 2025. The submission has been prepared by a team of academic researchers with extensive expertise in the ethical, legal, safeguarding, and operational applications of data analytics and AI within the criminal justice system. Our work is anchored in the UKRI Responsible AI (RAi) UK Keystone Project, “PROBabLE Futures”, a four-year research initiative focused on evaluating probabilistic AI systems across the criminal justice sector.

Dr. T. Lawal, Dr. A. Paul, Dr. K. N. Kotsoglou, Professor M. Oswald MBE (Northumbria) and Professor C. McCartney (Leicester)

Download PDF

Building an integrated, rules-based medtech pathway

RAi UK response to the NHS Consultation on "Building an integrated, rules-based medtech pathway"

This submission, on behalf of RAi UK, was made in response to an NHS Consultation on integrated pathways.

Dr A. Bergin

Download PDF

AI in Banking and Financial Services

Techno-Regulatory AI Sandbox (TRAIS) response to the UK Parliament’s Treasury Committee Inquiry about AI in Banking and Financial Services

This collaborative response was made on behalf of the RAi UK-funded TRAIS project, and was prepared by researchers based at King’s College London, UK, and CeRAi-associated researchers based at the following institutions in India:

  • Indian Institute of Technology, Madras
  • Indian Institute of Technology, Bombay
  • National University of Juridical Sciences.

Dr I. Parisio, Dr K. Pillutla, Dr D. Olukan, Dr A. Bhagoji, Dr A. Gupta and Dr S. K. Guha

Download PDF

The International Scientific Report on the Safety of Advanced AI

International AI Safety Report January 2025

Building a shared scientific understanding in a fast-moving field
The report is the work of 100 international AI experts who collaborated in an unprecedented effort to establish an internationally shared scientific understanding of risks from advanced AI and methods for managing them.

Chair: Prof. Yoshua Bengio (Université de Montréal / Mila - Quebec AI Institute)

Download PDF

Independent Review of Criminal Courts

PROBabLE Futures Consultation Response to Sir Brian Leveson’s Independent Review of the Criminal Courts (2025)

This response to Sir Brian Leveson’s Independent Review of the Criminal Courts is submitted by a team of academic researchers with extensive expertise in the ethical, legal, safeguarding, and operational applications of data analytics and AI within the criminal justice system. Our work is anchored in the RAI Keystone Project, “PROBabLE Futures”, a four-year research initiative focused on evaluating probabilistic AI systems across the criminal justice sector. The project aims to develop a responsible, practical, and ‘operational-ready’ framework in collaboration with multiple criminal justice partners.

Dr. T. Lawal and Professor M. Oswald (Northumbria)

Download PDF

Independent Sentencing Review

PROBabLE Futures Response to the Independent Sentencing Review call for evidence (2025)

This response is submitted by academic researchers with extensive experience of theory and practice of real-world ethical, legal, safeguarding and operational approaches to data analytics and AI in criminal justice. Our RAI Keystone Project: “Probable Futures”, is a four-year research project reviewing Probabilistic AI systems across the criminal justice system to create a responsible and ‘operational-ready’ framework, working with multiple criminal justice partners.

Dr. E. Tiarks, Dr. C. Paterson-Young (Aberdeen) and Prof. C. McCartney (Leicester)

Download PDF

A Response to UN Interim Report on Governing AI for Humanity

Responsible AI Governance

Our response to the UN’s consultation on the Interim Report ‘Governing AI for Humanity’. The report calls for closer alignment between international norms and how AI is developed and rolled out.

Dr S Kiden, Prof. S Stein and Prof. S D Ramchurn

Download PDF

Proposed Policing Inspection Programme and Framework

PROBabLE Futures Response to HMICFRS Consultation on Proposed policing inspection programme and framework (2024)

This response is submitted by academic researchers with extensive experience of theory and practice of real-world ethical, legal, safeguarding and operational approaches to data analytics and AI in policing. Our RAI Keystone Project: “Probable Futures”, is a four-year research project reviewing Probabilistic AI systems across law enforcement to create a responsible and ‘operational-ready’ framework, working with the NPCC, police forces and multiple partners.

Prof. C. McCartney (Leicester), Prof. M. Oswald (Northumbria), Dr. C. Paterson-Young, Dr. K. Kotsoglou, and Dr. E. Tiarks (Aberdeen)

Download PDF

APP Consultation

PROBabLE Futures Response to College of Policing Data Ethics and Data-Driven Technologies APP Consultation (2024)

Response on behalf of the ‘PROBabLE Futures: Probabilistic AI Systems in Law Enforcement Futures’ Responsible AI UK Keystone Project and the Centre for Emerging Technology and Security, The Alan Turing Institute.

Prof. M. Oswald, Prof. M. Calder, Dr. K. N. Kotsoglou, Dr. M. Maher, Prof. C. McCartney, Dr. K. Montague, Dr. C. Paterson-Young, Dr. E. Tiarks and Dr. R. Powell.

Download PDF