Understanding Global Content

Natural language is the most intuitive medium for human-machine communication.  Our vision is to contribute to the understanding of how language is used in human thought and communication, and thereby to achieve truly effective, frictionless human-human and human-machine interaction and collaboration through natural language.  To achieve this goal, computers should understand not only the physical world the speaker refers to, including objects, relations, events, times and spaces, but also the speaker's mind, including intentions, attitudes, sentiments and emotions.  The computer should be able to interact with users in their native languages through text, speech, image and video, including helping the user find and extract information from the Internet, summarise that information, answer the user's questions and take action on the user's requests. This ability to be both informative and performative is a critical step forward.

Research team

Publications

Plug and Play for a Transferrable Sense of Humour

  • Posted: 15 Jul 2018
  • Author: Tony Veale
  • Publication: DAPI 2018 Distributed, Ambient and Pervasive Interactions: Technologies and Contexts - 6th International Conference
Conference

Storytelling by a Show of Hands: A framework for interactive embodied storytelling in robotic agents

  • Posted: 5 Apr 2018
  • Author: Tony Veale, Phillip Wicke
  • Publication: AISB 18 - the Conference on Artificial Intelligence and Simulation of Behaviour
Journal Article

Tweet dreams are made of this: Appropriate incongruity in the dreamwork of language

  • Posted: 1 Jul 2017
  • Author: Veale, T., Valitutti, A.
  • Publication: Lingua
A Novel Approach to Dropped Pronoun Translation.

  • Posted: 12 Jun 2016
  • Author: Longyue Wang, Andy Way, Zhaopeng Tu, Xiaojun Zhang, Hang Li
  • Publication: NAACL HLT 2016

Research Goals

We analyse, annotate and extract meaningful information and knowledge from textual content across multiple languages and domains. We develop a range of robust, domain-agnostic linguistic analysis tools, which can be applied to any language and which are informed by cues from non-linguistic sources.

We aim to develop the theories and technologies to understand digital content through language, at the following three layers:

  • Understanding language forms and structures, including morphology, syntax, semantics and discourse.  This research focuses on base language technologies, utilising state-of-the-art machine learning and deep learning approaches to obtain cross-lingual, cross-domain and cross-modal content representations and to improve the morphological, syntactic and semantic analysis of digital content.
  • Understanding the physical world through language, including objects, relations, events, times and spaces.  This research focuses on using these representations to reason about language styles, events and topics. Researchers will advance machine learning techniques for content-based analysis by focusing on co-reference to entities and events at varying granularity, along with devising question-answering technology for events and novelty-detection technology for monitoring topics and events.
  • Understanding the human mind through language, including the speaker's intentions, attitudes, sentiments and emotions.  This research addresses the mechanics of variability in both the analysis and generation of digital content. Specific to this work is the notion that the multiple layered meanings of words and phrases can vary not just with the language, but with the speaker, the medium and the context. Research here includes detecting changes in the meaning of words over time or across domains, assisting the disambiguation of phrases and terms, and detecting meaning over structures larger than individual words.
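One way to make the last goal concrete, detecting shifts in word meaning across domains, is to compare a word's co-occurrence profiles in two corpora. The sketch below is illustrative only: the toy corpora, function names and window size are our own assumptions, not part of any ADAPT tooling. It builds bag-of-words context vectors for a target word in two small "domains" and compares them with cosine similarity, so a word used differently in each domain scores lower across domains than within one.

```python
import math
from collections import Counter

def context_vector(corpus, target, window=2):
    """Bag-of-words context vector for `target`, from co-occurrence counts."""
    vec = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                vec.update(t for t in tokens[lo:hi] if t != target)
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in set(u) & set(v))
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Two toy "domains": the word "cell" shifts meaning between them.
biology = ["the cell membrane divides", "a living cell divides fast"]
tech = ["charge the cell battery daily", "the battery cell stores charge"]

v_bio = context_vector(biology, "cell")
v_tech = context_vector(tech, "cell")
v_bio_new = context_vector(["each living cell membrane divides"], "cell")

# Cross-domain similarity is lower than within-domain similarity.
print(cosine(v_bio, v_tech) < cosine(v_bio, v_bio_new))  # True
```

Real systems would replace the raw counts with learned embeddings and much larger corpora, but the comparison logic is the same: represent a word by its contexts, then measure how far those contexts drift between time periods or domains.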

Newsletter

Sign up to our newsletter for all the latest updates on ADAPT news, events and programmes.