Learner perceptions with AI tutors

by Aisling Third

The SAGE-RAI project, a collaboration between the Open University’s Knowledge Media Institute and the Open Data Institute, is developing an AI Digital Assistant, along with ethical guidelines on the development and use of such a tool, to be integrated into the ODI’s online courses. The assistant will allow learners to question and explore course materials in situ, and receive personalised answers and revision activities. The motivation is that one-to-one teaching is well known to be extraordinarily effective – with outcomes up to two standard deviations better than group methods – but is also traditionally very expensive and limited to elites. Enabling greater personalisation of education for everyone is therefore an important goal, and AI has significant potential to augment the capacity of human tutors to help achieve it.

Of course, reflecting on possible unintended outcomes is a fundamental part of RRI, and we are acutely aware of the narrative that AI tools like this might be used to replace human roles rather than augment them. Early feedback from learners and tutors, however, indicates that both groups value AI assistant support most for features which human tutors don’t, and can’t be expected to, have: always being available, never tiring of going over the same material repeatedly, and not judging “stupid” questions. It’s interesting to consider the role of perception here: human tutors may in reality welcome all kinds of questions without judgment, but if learners self-censor what they ask for fear of being judged, the effect is the same, and there may be benefit from AI with regard to that perception alone.

This element of perception is something we are exploring further. For example, might a perceived non-judgmental AI lead learners to be more open about pastoral or personal issues, and if so, how do we, as researchers, handle this safely and ethically?