Interacting with Global Content

This research activity enables people to interact with digital content in ways that are more intuitive and that mimic the richness of human perception in every interaction. It goes beyond text- and speech-based exchange of content to full multimodal interfaces that interpret information from a multitude of audio and visual cues. By furthering both the understanding and the automatic analysis of how humans interact with digital content and with each other, we are transforming the retrieval, understanding, and delivery of multimodal content for users.

To drive a more complete understanding of multimodal interaction, both between humans and between humans and digital content, we build automatic systems that track engagement and affective response and judge how best to retrieve and render responsive content for the user.

Publications

Interacting with Global Content

  • Posted: 5 Aug 2018
  • Author: Emer Gilmartin, Christian Saam, Brendan Spillane, Maria O'Reilly, Ketong Su, Arturo Calvo, Loredana Cerrato, Killian Levacher, Nick Campbell and Vincent Wade
  • Publication: LREC 2018 - 11th International Conference on Language Resources and Evaluation

A Chaotic Approach on Solar Irradiance Forecasting

  • Posted: 20 Dec 2019
  • Author: T. A. Fathima, Vasudevan Nedumpozhimana, Yee Hui Lee, Stefan Winkler, and Soumyabrata Dev
  • Publication: PIERS 2019 - Progress In Electromagnetics Research Symposium

Social Presence and Place Illusion Are Affected by Photorealism in Embodied VR

  • Posted: 1 Oct 2019
  • Author: Katja Zibrek, Rachel McDonnell
  • Publication: ACM SIGGRAPH Conference on Motion, Interaction, and Games

Is photorealism important for perception of expressive virtual humans in VR?

  • Posted: 1 Sep 2019
  • Author: Katja Zibrek, Sean Martin, Rachel McDonnell
  • Publication: ACM Transactions on Applied Perception

Research Goals

New methods are being developed to process both speech-only and audio-visual data and to train statistical engines that infer attentional state. The ability to track user engagement and interest in conversational interaction is key to reproducing natural interactions in the future, whether with a robot, a personal assistant, or an avatar. The research also explores what makes an avatar, or computer-generated speaker, engaging to a user, uniquely combining ADAPT expertise in expressive synthesis, the role of paralinguistic cues in speech, and avatar animation.
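
To make this concrete, here is a minimal, illustrative sketch of training such a statistical engine: a simple classifier that labels an utterance as engaged or disengaged from a handful of audio-visual features. The feature names (pitch variance, speech rate, gaze-on-speaker ratio, head-nod count) and the synthetic data are assumptions for illustration only, not the ADAPT system.

    # Illustrative sketch only: a toy engagement classifier trained on
    # synthetic audio-visual features; feature names are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Per-utterance features: [pitch variance, speech rate,
    # gaze-on-speaker ratio, head-nod count]; label 1 = engaged, 0 = disengaged.
    n = 500
    X = rng.normal(size=(n, 4))
    y = (0.8 * X[:, 2] + 0.5 * X[:, 3] + 0.3 * X[:, 0]
         + rng.normal(scale=0.5, size=n) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Standardise the features, then fit a linear classifier as the "statistical engine".
    clf = make_pipeline(StandardScaler(), LogisticRegression())
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

In a real system the synthetic features would be replaced by measurements extracted from the speech and video streams, and the linear model by whatever architecture the data supports; the shape of the pipeline stays the same.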

The research also addresses multimodal content relevant to interaction from two perspectives. The first tackles the challenges of locating and isolating objects of interest in a visual stream and of exploiting visual cues in speech to augment speech recognition, alongside techniques for learning from unstructured multimodal data streams. The second establishes methods for exploiting dialogue in user interaction for information retrieval, and for exploiting context to enable proactive information retrieval.
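
As a concrete illustration of the first perspective, the sketch below shows one common way of exploiting visual cues to augment speech recognition: log-linear late fusion, in which per-word scores from an audio model and a visual (lip-reading) model are combined to rescore competing hypotheses. The words, scores, and weighting here are invented for illustration and are not drawn from ADAPT's systems.

    # Illustrative sketch only: late fusion of hypothetical per-word scores
    # from an audio model and a visual (lip-reading) model.
    import math

    def fuse(audio_logprobs, visual_logprobs, visual_weight=0.3):
        """Combine per-hypothesis log-probabilities by weighted addition (log-linear fusion)."""
        floor = math.log(1e-6)  # back-off for words the visual model never scored
        return {
            word: score + visual_weight * visual_logprobs.get(word, floor)
            for word, score in audio_logprobs.items()
        }

    # Two acoustically confusable hypotheses; the visual cue (lip closure at onset)
    # favours "bat" and overturns the slim acoustic preference for "pat".
    audio = {"bat": math.log(0.48), "pat": math.log(0.52)}
    visual = {"bat": math.log(0.70), "pat": math.log(0.30)}

    fused = fuse(audio, visual)
    print(max(fused, key=fused.get))  # -> bat

The visual weight expresses how much trust is placed in the visual channel relative to the audio channel; in practice it would be tuned on held-out data rather than fixed by hand.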
