This research enables people to interact with digital content in ways that are more intuitive and that mimic the richness of human perception. It goes beyond text- and speech-based exchanges to full multimodal interfaces that interpret information from a multitude of audio and visual cues. By furthering both the understanding and the automatic analysis of how humans interact with digital content and with one another, we are transforming the retrieval, understanding, and delivery of multimodal content for users.
To drive a more complete understanding of multimodal interaction, both between humans and between humans and digital content, we build automatic systems that track engagement and affective response and determine how best to retrieve and render responsive content for the user.