Over the past year, remote work, decreased travel, virtual meetings and social distancing have distanced us from ‘real’ human contact. Forced out of in-person experiences, we have had to rely heavily on video conferencing to maintain interactions with friends, family and business associates. Maintaining natural interactions in a virtual setting is challenging, as there are fewer opportunities for mutual social exchanges. So how can we connect better in a virtual space, and can technology improve collaboration and communication?
Recently, there have been huge technological advances in real-time computer graphics, enabling sophisticated interactions between people through highly realistic avatars and augmented and virtual reality (AR/VR). These AR/VR devices make content truly 3D and immersive: people can cohabit a room with a digital representation of a friend, partner or relative, and interact, talk and even touch, while being in different parts of the world in real life.
These systems are no longer science fiction, but the current state of the art is still lacking in terms of avatar realism and the ability to reproduce subtle human motions and emotions. This means that even though we can co-exist in a virtual space, our avatars might not have the necessary capabilities for conversation with other virtual humans. Maybe they don’t look enough like us, or their appearances are too cartoon-like for a serious conversation, or they can’t reproduce our facial and body expressions sufficiently, resulting in information loss during communication.
Mimicking our Unique Expressions
My research focuses on computer graphics and aims to make the appearance and motion of these types of virtual avatars more realistic, through perceptual experiments and new algorithms based on real human movement. Recently, my research group ran experiments to test the use of personalised avatars in video conferencing for effective communication. We found that using virtual representations in video conferencing was a positive experience for most users and reduced ‘Zoom fatigue’, allowing people to be less focused on their own appearance while on a call. We also developed new algorithms for creating virtual characters that can mimic our unique expressions, helping to improve communication.
Speech to Gesture
My research has also led me to investigate photorealistic embodied conversational agents. These are essentially virtual humans that look real and can converse with you using artificial intelligence. Such agents can be used in a range of scenarios, for example to improve training in healthcare, education and sales. However, humans are very effective communicators and notice small irregularities in the motion or behaviour of virtual humans, which makes automating that behaviour difficult. My research group has developed new algorithms for automating the gesture behaviours of these characters, so that they can produce appropriate hand gestures during conversation, a surprisingly difficult task to automate. Our work in this area has been published at the top venues for intelligent virtual agents, and we have begun investigating the commercial feasibility of such a system.
It is undeniable that interacting in person is different from interacting over a computer. My research is helping drive innovations that will augment virtual environments and help participants engage in a more meaningful and realistic way. It is my hope that these new virtual experiences can preserve or even strengthen our interpersonal and professional connections for a future that will rely more on remote interactions.
Rachel McDonnell received her BA(Mod) and PhD from Trinity and joined the School of Computer Science and Statistics as a lecturer in 2011. She is now Associate Professor of Creative Technologies and a principal investigator with ADAPT, Trinity’s Centre for AI-driven Digital Content Technology, and was elected a Fellow of Trinity College Dublin in 2020. The recipient of a prestigious SFI Career Development Award and a recent Frontiers for the Future grant, she has published almost 100 articles in peer-reviewed conferences and journals. Her research focuses on building plausible and appealing virtual humans.
This piece is also published in the 2011 – 2021 Provost Retrospective Review