by Dr Raquel Iniesta (project lead), King's College London

From left to right: Prof Payam Barnaghi (ICL), Prof Susan Shelmerdine (UCL), Dr Nenad Tomasev (DeepMind), Dr Ellie Asgari (Guy’s and St Thomas’ NHS Foundation Trust) and Mrs Jennie Wilson Bradley (patient)
Our project, The Human Role to Guarantee an Ethical AI for Healthcare, started with a clear idea: if AI is going to support healthcare meaningfully, ethics must be central, not an afterthought. We found it essential to ensure that everyone's voice could help shape what "ethical AI" looks like in practice. The conversation must include not only technical developers, but also patients, clinicians, and everyone else affected by the technology.
The result is a tool we built to do two things: trigger and guide conversations about ethical AI, and build basic AI literacy across stakeholders. It is designed to bring patients, healthcare professionals, and developers into shared dialogue about the ethical tensions and trade-offs of using AI. It also supports AI literacy, especially for those who have not traditionally been part of these discussions and who need foundational knowledge of AI to develop the critical skills to make informed decisions involving it.
From the beginning of the tool's development, we approached this work through the lens of Responsible Research and Innovation (RRI), understood as a commitment to reflection, openness, and dialogue. RRI shaped how we involved stakeholders and stayed responsive to different perspectives, especially when the "right" answer wasn't obvious. To this end, we used the Responsible Innovation Prompts and Practice Cards (RI Cards)1 within our team and with stakeholders to guide our discussions and to structure critical conversations across different groups, including healthcare professionals, patients and developers.
Activities with the RI Cards helped us surface assumptions we hadn't questioned before, such as who gets to define what counts as "beneficial" AI in healthcare settings. They also offered a common language for communicating with stakeholders about key ethical and social dimensions, like accountability, transparency, and long-term impact, early in the design process. In several cases, they influenced design choices in our tool, prompting us to rethink how users with different backgrounds might interpret risk or trust AI recommendations, which led us to prioritise accessibility and clarity in both language and functionality.
Equity, Diversity, and Inclusion (EDI) was also a key concern for us. Rather than assuming one-size-fits-all solutions, we actively tried to include people from underrepresented groups in conversations. This wasn’t always easy or perfect, but it made a difference. It helped us think critically about who gets heard in AI development and who doesn’t.
We don't have all the answers, but we believe in the power of inclusive, ethical conversations to build trust and better outcomes in healthcare AI. For us, this is what responsible innovation looks like in real life.