RAI UK Awards £200,000 Collaboration Grants to support the UK’s Mission for Advanced Research in Responsible AI

New funding supports collaborations spanning AI assurance, engineering, skills development, and real-world deployment across healthcare, education, justice, and sustainability.

[26 January 2026] – Responsible AI UK (RAI UK) has funded 11 new research projects through its Collaboration Grant, investing close to £200,000 to support collaborative research that helps ensure artificial intelligence is deployed safely, fairly, and effectively, and used in ways that benefit people, communities and the economy. The funded projects bring together universities, NHS Trusts, civil society organisations and industry partners, directly supporting the UK Government’s AI mission to harness the benefits of AI whilst managing its risks.

The grants focus on four priority areas that reflect national needs and the UK Government’s AI mission: AI sustainability, AI deployment costs and benefits, AI skills, and AI engineering. Projects span sectors including healthcare, education, justice, public services, creative and digital industries, geospatial data, and online safety. Projects range from evaluating AI-powered voice technologies in NHS hospitals to teaching schoolchildren about ethical AI use, and from measuring AI’s impact on the gender pay gap to developing privacy-preserving methods for legal AI systems.

“Amidst a rapid proliferation of synthetic image- and video-based content in online ecosystems, our research addresses urgent gaps in understanding AI-driven harms,” said Dr Allysa Czerwinsky from the University of Manchester, whose team is examining extremist visual content and AI-powered moderation solutions. “This project seeks to develop evidence-based recommendations for policymakers and platform developers to mitigate these harms whilst preserving legitimate expression.”

The funding reflects growing recognition that responsible AI development requires collaboration across disciplines and sectors. Prof Sascha Ott at the University of Warwick is working with two NHS Trusts to evaluate ambient voice technologies for clinical documentation. “We are very excited to contribute to evaluations of the responsible adoption of ambient voice technologies in the NHS, which will hopefully lead to great reductions of staff time spent on paperwork,” said Prof Ott.

Several projects focus on building AI literacy across society. Dr Konstantina Martzoukou from Robert Gordon University is partnering with Aberdeen Science Centre to create a sustainable training model for ethical AI education. “We are delighted to have received funding for GenAISiS T.R.A.I.N. – a collaborative project that empowers young people to navigate Generative AI with confidence, creativity, and critical awareness,” Dr Martzoukou explained. “This is a major step toward ensuring every learner has the skills to thrive in an ethical and inclusive AI-shaped world.”

Speaking on the announcement, Prof Gopal Ramchurn, RAI UK CEO, said, “This collaboration grant is about building national capacity. By supporting short, focused projects that bring researchers together with practitioners and policymakers, we are helping ensure the UK’s AI ambitions are matched by strong evidence, practical tools and real-world understanding.”

Prof Bashar Nuseibeh, Chair of the RAI UK Research Pillar, said, “These 11 projects demonstrate how the UK research community is building complementary collaboration partnerships within the UK and across the globe. The collaborations will address some key responsible AI research challenges, recognising that these challenges will require multi-disciplinary expertise that is geographically distributed. From protecting citizens from AI-driven harms to ensuring our NHS can adopt new technologies safely, and from teaching the next generation to think critically about AI to measuring its real-world impact on equality, these projects tackle the questions that matter most to communities, public services, and our national competitiveness.”

The international dimension of the work is equally significant. Prof Carsten Maple’s team at the University of Warwick is collaborating with partners including Singapore’s AI Safety Institute, Nanyang Technological University, and Carnegie Mellon University to develop frameworks for national-level AI governance. Meanwhile, Denis Newman-Griffis at the University of Sheffield is working to identify what truly matters when measuring responsible AI in public services.

“To put Responsible AI into practice, we need to know when we’re doing it right,” said Denis Newman-Griffis. “Current approaches to measuring and evaluating AI systems often miss the mark on telling us about real-world impact and measuring what really matters for users and people affected by AI.”

The importance of responsible geospatial AI is highlighted by Prof Jadu Dash at the University of Southampton, who is working with Ordnance Survey to develop frameworks for ethical use of location-based AI systems. “Geospatial data and AI are transforming decision-making from local to national levels,” said Professor Dash. “This project brings the geospatial and AI communities together to build a responsible community of practice to shape the future of global geospatial AI governance and innovation.” His team’s work addresses the rapid advancement of geospatial AI through foundation models and satellite imagery, which enable everything from land-use mapping to disaster response, whilst ensuring these powerful tools are deployed with appropriate safeguards for privacy and fairness.

The projects bring together expertise through collaborations spanning UK institutions and international partners in Australia, Canada, Estonia, Finland, Italy, Japan, Singapore and the United States. Key industry partners include Ordnance Survey, Nokia Bell Labs, and organisations working across the justice, technology, and third sectors.

Prof Miriam Fernandez from The Open University, whose project examines AI’s impact on the gender pay gap, emphasised the importance of addressing structural inequalities. “Pay inequality remains one of the most persistent forms of gender discrimination, shaping women’s lifetime earnings and economic security,” she said. “As AI transforms organisational practices, there is a real risk that existing disparities may be unintentionally amplified. It is therefore vital that AI is developed and deployed in ways that reduce, rather than exacerbate, this longstanding problem.”

The funded projects align closely with the Government’s AI Opportunities Action Plan, and support the Science Secretary’s commitment to making the UK a science and technology superpower through responsible innovation.

For more information about the RAI Collaboration Grant and the funded projects, visit: https://rai.ac.uk/all-projects/

ENDS

Notes to Editors

About RAI UK

With a £35 million UKRI investment, RAI UK is a programme dedicated to delivering interdisciplinary research and fostering an international ecosystem for Responsible AI research and innovation. Through extensive consultations across the UK, RAI UK has identified emerging challenges in responsible AI and has deployed over £17 million into projects aimed at accelerating the adoption of responsible AI practices and technologies. RAI UK brings research-based expertise that is connective, adaptive, and world-leading through field-building and engagement with communities, publics, industries, and governments. RAI UK is led from the University of Southampton and backed by UK Research and Innovation (UKRI) through the UKRI Technology Missions Fund and EPSRC. For more information or to get involved, please visit rai.ac.uk.

Media Contact:
Kavita Mittal
Head of Comms – Responsible AI UK
info@rai.ac.uk