by Jennifer Williams (project lead), University of Southampton

Project workshop held on 11th June 2025, London.
Responsible research and innovation is the foundation of AI safety: it looks across the entire lifecycle of an AI system and the people who interface with it. The breadth of AI safety spans inception, design, organisational aspects, testing and evaluation, and deployment with end-users.
Our RAI UK International Partnership project, AI Regulation Assurance for Safety-Critical Systems, brings a multinational lens to AI safety, drawing on trusted research partners in the United Kingdom, Australia, and the United States to examine it through the prism of regulation. In the maritime sector, for example, there are strong regulations governing how vessels operate in international waters. These regulations must still hold even when fully autonomous vessels of the future are operating in the world’s oceans.
One of our tasks in this project is to understand the gap between human-only performance and human-AI teaming, upholding existing maritime regulations and standards while identifying where such regulations may fall short when AI is involved. The effect is that AI is held to a high standard, but the goal is not necessarily human-like performance. Rather, we seek a deeper understanding of human-AI command and control.
Naturally, some tasks, such as processing vast amounts of data, are very well-suited to AI systems. At the same time, certain safety-critical decisions are better suited to human judgement. Striking this delicate balance while making the most of AI is not easy, and doing so requires engagement with multiple stakeholders. That is why this project brings together multiple disciplinary lenses, multiple national lenses, and input from academia, industry, regulators, government, and others. We involve all career stages in our workshops, from students to senior leaders. Through a series of workshops in each of the three AUKUS nations, we capture the diversity of each national conversation about AI regulation, identify points of convergence, and surface open technical problems that require additional research and innovation.
The main challenge we encountered in applying an RRI and EDI lens to our project, from the outset and throughout, has been finding ways to involve people beyond our national workshops. We have addressed this by drawing on our extended networks through one-to-one interviews and institutional visits in all three nations. The benefits include candid and open viewpoints, hearing from PhD students and undergraduates, and mitigating any perceived conflicts of interest between stakeholders. In the maritime sector specifically, we have found that technology developers, especially in academia, are often unaware of how their foundational research may later be deployed in the real world, including the nuances of such complex real-world environments. Our use of the AREA framework (Anticipate, Reflect, Engage, Act) helps nurture these conversations, drawing academics into high-stakes problem spaces where they can increase the impact of their research.