Contributing authors in alphabetical order:
Abbott, Y Pamela; Aicardi, Christine; Andrei, Oana; Ball, Brian; Bentley, Caitlin; Blaize Alfred, Lindy-Ann; Brown, Amy Aisha; Cruz Reyes, Gisela; Duarte, Tania; Garasto, Stef; Garforth, James; Klyshbekova, Maira; Naiseh, Mohammad.
Image credits: Jamillah Knowles & Reset.Tech Australia / Better Images of AI / People on phones (portrait) / CC-BY 4.0
The UK recently launched its AI Opportunities Action Plan, in which “Changing lives by embracing AI” is a key focus. The Plan outlines a “Scan > Pilot > Scale” approach for government AI adoption and discusses public-private sector collaboration, but there is little mention of skills and education.
This is worrying when, in contrast, the public faces projections like those outlined in a Goldman Sachs report, anticipating that AI could replace 300 million full-time jobs.
How will the UK address major workforce transitions, particularly in preparing existing employees for AI-augmented roles and building broad-based digital literacy across all staff levels?
The plan also overlooks the vital need for hybrid skills that combine domain expertise with AI knowledge – for instance, healthcare professionals who understand both medical practice and AI capabilities.
Without a robust education strategy that addresses these gaps through comprehensive upskilling programmes and cross-disciplinary, responsible AI training, we risk adding new barriers to embracing AI by failing to support the workforce and our citizens to understand, implement, and work alongside these new technologies safely and responsibly.
Whether you’re directly or indirectly impacted by AI, a user or a developer of it, it is critically important to understand not just what AI is, but also how it is designed and developed, the motivations, principles and ethics guiding its creation, and its broader implications for individuals, society, and the environment. Involving everyone in shaping responsible AI policy and practice is essential, and education is key to facilitating this participation.
There is a growing demand among the UK population to learn more about AI and its implications: surveys by the Institute for the Future of Work and the Royal Society show that a majority of respondents believe AI will have a significant impact on their lives and would like to know more about its potential implications for society. Many people won’t have the time or resources to engage in traditional, vocational, or employer-led training, and will turn to online resources instead. With online learning on the rise, many of us will be finding out about RAI online.
People’s time is valuable, and it’s clear that online training has to provide added value for it to make a difference. So we, as a group of multidisciplinary academics in Responsible AI and Education, offer insights to help you make informed decisions about suitable learning materials in this area.
Responsible AI (RAI) is not a one-size-fits-all solution. Despite it becoming a common phrase, there is no single, universally accepted definition of it. Different organisations and training providers may have their own interpretations, often focusing on broad concepts like societal implications and mitigating unintended consequences. For example, Microsoft defines RAI in terms of six guiding principles, including “fairness”, stating that “AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways.” However, this definition is quite generic and does not address the nuances of fairness in diverse contexts.
Indeed, the principles and practices of RAI may vary across different cultural, social, and geographical contexts. For instance, research by Sambasivan et al. (2020) demonstrates that largely Western-developed fairness frameworks are not directly portable to India due to differences in societal structures, power dynamics, and cultural norms.
When choosing an RAI course, consider whether the training provider acknowledges and addresses the role of context in shaping RAI understanding and implementation. Look for programmes that recognise the limitations of applying Western-centric, generalised frameworks, and that consider diversity at their core.
Relate this to the developer perspective, too. Developers are accustomed to learning about technology in a decontextualised manner, but RAI is about engaging with stakeholders and anticipating impacts in specific contexts, so RAI education may be a shift away from that familiarity. RAI education emphasises equipping AI developers with a wide range of soft, strategic, and technical skills: collaborating with different stakeholders, adapting solutions to specific cultural and social contexts, and acknowledging that RAI education is not only for developers. Consider whether programmes recognise this, and/or make their intended audience explicit.
As you explore RAI online training options, be cautious of potential biases and underlying goals of the training providers, especially when they are private companies with their own interests. Every big player has a course nowadays, and guess what, they’re all free. Some providers may focus on their own areas of expertise, have a limited conceptualisation of RAI, or avoid topics that challenge their business models. For example, a company heavily invested in large language models (LLMs) may not include “understand the circumstances under which LLMs use is not appropriate” as part of their RAI training, as it conflicts with their business interests.
Whilst it may be tempting to turn to AI experts and technology providers for training, it’s crucial to remember that such expertise doesn’t always translate into effective learning design. To ensure your time and effort don’t go to waste, prioritise courses that are not only factually correct but also incorporate proven learning design principles, such as Universal Design for Learning (UDL). Such a provider will adopt a learner-centric approach and will explicitly explain how they expect learning to happen. They’ll also provide multiple ways for you to understand, engage with, and act on what you learn.
Prioritise courses that meet your needs, experiences, and learning goals in contexts that will make sense to you. Seek programmes that engage you in hands-on activities, case studies, and discussions to reinforce your understanding of RAI. For example, The Alan Turing Institute’s training uses scenarios and role-playing, which can help to apply learning. Also look out for programmes that offer actionable points and practical guidance on implementing RAI principles in real-world scenarios. While many current training programmes, such as those offered by Microsoft and Dataiku, do offer organisational structures and examples, they don’t provide learners with concrete steps to apply the knowledge covered.
We explore the findings of our emerging research, highlighting aspects of Responsible AI training and education that can empower you to take control of your future and help shape the impact that AI will have on our lives and society.
As AI technologies are increasingly used in educational content development and delivery, it is important to be aware of their presence in your learning experience. The EU AI Act emphasises the importance of transparency in AI use, and this principle should extend to educational settings, especially education about the ethics of AI. When enrolling in RAI courses, inquire about the use of AI in the learning approach. For instance, providers should clearly communicate to learners if a course includes AI-generated content or AI-assisted grading, and provide opportunities for learners to critically engage with the material. With generative AI known to make mistakes, ask how, and by whom, content was checked and verified.
The Responsible AI ecosystem is being forged as we speak. Existing educational materials play a crucial role in shaping its development, for example by framing what RAI is and who it’s for. As such, we offer the above guidance to help RAI learners make informed decisions about their educational journey, by recognising the context and implications of certain educational choices. Ideally, though, the journey towards RAI use and education should not end at the individual learner, but rather be one of collective action and responsibility.
As part of the RAI UK Skills Pillar, we have invited the wider AI ecosystem to get involved in collaborating on these guidelines.
This article has been written by a group of multidisciplinary academics and civil society organisations taking part in a research project led by Dr Caitlin Bentley, Deputy Chair of the RAI UK Skills Pillar, and has been granted ethics clearance by King’s College London (MRA-23/24-40800).
Author’s Note
This blog is part of the analysis of our forthcoming journal article and reflects the participatory approach we embraced in the project. In this study, participants were not just subjects but active collaborators in shaping the findings. By including them as co-authors, we recognise their valuable insights and their expertise. This approach highlights the importance of bridging the gap between research and communities and demonstrates our commitment to inclusivity and collective contribution to knowledge.