The techniques involved in AI have become increasingly complicated, making it more and more challenging for us to explain the work we do as researchers in the field. How do we make our research intelligible to a non-specialist audience? How do we avoid having people switch off the minute we start speaking?
One of the benefits of the Human+ programme is the opportunity it gives us to communicate the work we do to a wider audience. There are courses and talks to help with building communication and public speaking skills, but, unlike many other research programmes, we also have a platform here to reach people from a broad range of research backgrounds. I was able to introduce my data-driven methodology in front of a crowd of humanities researchers at our launch event, for example, and we’ve all given talks in the Trinity Long Room Hub Arts and Humanities Research Institute to a general audience. The Human+ team has helped us with preparation and given constructive feedback – a hands-on approach that’s ensured we really think through the best way of communicating to specialists in different fields.
Being based at the ADAPT Centre of Excellence for AI-Driven Digital Content Technology has also given me the chance to connect with people at the forefront of AI research, whether at conferences, seminars or more informal gatherings. What’s been particularly eye-opening is the extent to which TCD researchers are using technology to advance real-world solutions. The exciting work that’s going on inspired me to sign up to a course on innovation pathways at Tangent, a TCD workspace that fosters student entrepreneurship. Here, I’ve seen example after example of research work being converted into tangible, innovative products. AI has the potential to transform the way we respond to some of the world’s biggest challenges, and my coursework at Tangent has helped me recognise the value of my research in tackling these bigger economic and social problems.
Since beginning my Human+ fellowship, I’ve been working under the mentorship of Vincent Wade of the School of Computer Science and Statistics and Keith Johnston of the School of Education to develop AI techniques that help improve educational practices. In education, assessment frameworks still vary considerably, and results depend heavily on individual instructors and local schools. So can we design less biased forms of assessment, ones that rise above differences in teaching materials, learning strategies and teachers’ own personal judgement? That’s what my Human+ project is about in a nutshell – using AI to deepen our understanding of students’ work.
One of the drawbacks of conventional quantitative assessment methods is that they measure students’ ability at a single point in time, using standardised testing materials. What’s different about our approach is the continuous, long-term assessment of progress at a fine-grained level, with students freely picking exercises and learning at their own pace. With datasets covering more than 13,000 questions and 180 types of knowledge in language learning, our project can infer students’ behaviour and performance patterns on a massive scale, allowing teachers to spot those at high risk of failing, as well as those with great potential to improve.
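To give a flavour of how this kind of continuous, fine-grained assessment can work, a standard technique in the field is Bayesian Knowledge Tracing, which updates an estimate of a student’s mastery of a skill after every single answer rather than waiting for a test. The sketch below is purely illustrative – the function name and parameter values are my own assumptions for this example, not details of our project’s actual models.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Bayesian Knowledge Tracing step.

    Given the current estimated probability that a student has mastered
    a skill, update it after observing one answer (correct/incorrect),
    then account for the chance the student learned from the exercise.
    Parameter values here are illustrative placeholders.
    """
    if correct:
        # A correct answer is evidence of mastery, unless it was a guess.
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        # An incorrect answer is evidence against mastery, unless it was a slip.
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # The student may also have learned the skill from this exercise.
    return posterior + (1 - posterior) * p_learn

# Example: tracking one student's mastery of a single skill
# across a freely chosen sequence of exercises.
p = 0.3  # prior probability of mastery
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

Running such an update per skill, per student, over thousands of questions is what makes it possible to flag students whose mastery estimates stay low (at risk of failing) or climb quickly (high potential to improve).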
The aim here is to support students to learn and thrive, and help instructors better understand their students through AI algorithms. In that sense it also speaks to the wider motivation of the Human+ programme – developing human-centred technology to facilitate meaningful interaction between humans and machines. I’m glad to have found a programme that places such value on these interactions.
This article is written by Dr Qian Xiao, a Human+ programme fellow working at the Trinity Long Room Hub Arts and Humanities Research Institute and the ADAPT Centre of Excellence for AI-Driven Digital Content Technology at Trinity College Dublin, Ireland. Her research focuses on the development of computing technology with insights informed by the Arts and Humanities.