
We are submitting this response to the Department for Science, Innovation and Technology (DSIT) open call for evidence regarding the AI Growth Lab on behalf of Responsible AI UK (RAi UK), an open and multidisciplinary network that brings together experts from across the four nations of the UK to understand how we should shape the development of AI to benefit people, communities and society. To arrive at this response, we sent out a call to Principal Investigators and Co-Investigators of all RAi UK-funded projects, inviting them to contribute to this consultation. What follows is a synthesis of the responses we received from our research community.
Executive Summary
Overall, the RAi UK research community welcomes the government’s efforts to create spaces that balance innovation with appropriate guardrails, ensuring responsible AI development and deployment. When properly designed and implemented, regulatory sandboxes can be a helpful mechanism for addressing regulatory barriers and enhancing regulators’ learning. The comments herein aim to inform the government’s endeavours to establish a cross-economy sandbox, providing recommendations on its design so that the whole of UK society can enjoy the benefits of this legal experimentation, while being attentive to and mitigating potential risks.
This response was submitted to the inquiry on Artificial Intelligence, Cultural Rights, and the Right to Development by the Expert Mechanism on the Right to Development (EMRTD) of the United Nations Office of the High Commissioner for Human Rights (OHCHR), on behalf of the CulturAI project. We are a team of researchers funded by Responsible AI UK (RAi UK), working on Human-Computer Interaction (HCI) and Responsible Artificial Intelligence (AI) topics, and based at the University of Nottingham, the University of Southampton, and King’s College London. CulturAI is a 12-month project, running from February 2025 to February 2026, in which we have been investigating the evaluation of cultural sensitivity and representation in Text-to-Image (T2I) models. We welcome this consultation and aim to inform the EMRTD study.
The response is based on our research findings from a literature review on how culture has been conceptualised in Generative AI (Gen AI) research, and co-creation workshops with 59 individuals from diverse cultural backgrounds in 19 countries, in which we co-created a mixed-methods methodology for evaluating cultural representation in T2I models. This response also draws on our own experiences and expertise.
https://doi.org/10.25878/7nmj-0t79
This is a submission, on behalf of RAi UK and the Citizen-Centric AI Systems (CCAIS) project, in response to the Joint Committee on Human Rights’ inquiry on ‘Human Rights and the Regulation of Artificial Intelligence’ published in July 2025.
The submission addresses human rights issues, existing legal and regulatory frameworks, and possible changes to them.
Oral evidence based on this response was presented by Dr Sarah Kiden to the Joint Committee on Human Rights on 29 October 2025.
This is a submission in response to the Joint Committee on Human Rights’ inquiry on ‘Human Rights and the Regulation of Artificial Intelligence’ published in July 2025. The submission has been prepared by a team of academic researchers with extensive expertise in the ethical, legal, and technical implications of AI use, directly relating to human rights. Our work is anchored in the UKRI Responsible AI (RAi) UK Keystone Project, “PROBabLE Futures”.
We submitted this response to the Intellectual Property Office (IPO), the Department for Science, Innovation and Technology (DSIT) and the Department for Culture, Media and Sport (DCMS) joint consultation on Copyright and Artificial Intelligence on behalf of Responsible AI UK (RAi UK), an open and multidisciplinary network that brings together researchers from across the four nations of the UK to understand how we should shape the development of AI to benefit people, communities and society.
This is a submission in response to the European Commission’s Public Consultation on high-risk AI systems, published on 6 June 2025. The submission has been prepared by a team of academic researchers with extensive expertise in the ethical, technical, legal, safeguarding, and operational applications of data analytics and AI within the criminal justice system. Our work is anchored in the UKRI Responsible AI (RAi) UK Keystone Project, “PROBabLE Futures”, a four-year research initiative focused on evaluating probabilistic AI systems across the criminal justice sector.
We submitted this response to the Department for Science, Innovation and Technology (DSIT) consultation on the new AI Management Essentials (AIME) tool on behalf of Responsible AI UK (RAi UK), an open and multidisciplinary network that brings together researchers from across the four nations of the UK to understand how we should shape the development of AI to benefit people, communities and society.
This is a submission in response to the Ministry of Justice’s Call for Evidence on the ‘Use of Evidence Generated by Software in Criminal Proceedings’ published on 21 January. The submission has been prepared by a team of academic researchers with extensive expertise in the ethical, legal, safeguarding, and operational applications of data analytics and AI within the criminal justice system. Our work is anchored in the UKRI Responsible AI (RAi) UK Keystone Project, “PROBabLE Futures”, a four-year research initiative focused on evaluating probabilistic AI systems across the criminal justice sector.
This submission, on behalf of RAi UK, was made in response to an NHS Consultation on integrated pathways.
This collaborative response was made on behalf of the RAi UK-funded TRAIS project and was prepared by researchers based at King’s College London, UK, and CeRAi-associated researchers based at the following institutions in India.
Building a shared scientific understanding in a fast-moving field
It is the work of 100 international AI experts who collaborated in an unprecedented effort to establish an internationally shared scientific understanding of risks from advanced AI and methods for managing them.
This response to Sir Brian Leveson’s Independent Review of the Criminal Courts is submitted by a team of academic researchers with extensive expertise in the ethical, legal, safeguarding, and operational applications of data analytics and AI within the criminal justice system. Our work is anchored in the RAI Keystone Project, “PROBabLE Futures”, a four-year research initiative focused on evaluating probabilistic AI systems across the criminal justice sector. The project aims to develop a responsible, practical, and ‘operational-ready’ framework in collaboration with multiple criminal justice partners.
This response is submitted by academic researchers with extensive experience of the theory and practice of real-world ethical, legal, safeguarding and operational approaches to data analytics and AI in criminal justice. Our RAi UK Keystone Project, “PROBabLE Futures”, is a four-year research project reviewing probabilistic AI systems across the criminal justice system to create a responsible and ‘operational-ready’ framework, working with multiple criminal justice partners.
Our response to the UN’s consultation on its Interim Report, Governing AI for Humanity. The report calls for closer alignment between international norms and how AI is developed and rolled out.
This response is submitted by academic researchers with extensive experience of the theory and practice of real-world ethical, legal, safeguarding and operational approaches to data analytics and AI in policing. Our RAi UK Keystone Project, “PROBabLE Futures”, is a four-year research project reviewing probabilistic AI systems across law enforcement to create a responsible and ‘operational-ready’ framework, working with the NPCC, police forces and multiple partners.
Response on behalf of the ‘PROBabLE Futures: Probabilistic AI Systems in Law Enforcement Futures’ Responsible AI UK Keystone Project and the Centre for Emerging Technology and Security at The Alan Turing Institute.