Understanding Global Content

Natural language is the most intuitive medium for human-machine communication.  Our vision is to contribute to the understanding of how language is used in human thought and communication, and thereby to enable truly effective, frictionless human-human and human-machine interaction and collaboration through natural language.  To achieve this goal, computers must understand not only the physical world the speaker refers to, including objects, relations, events, times and spaces, but also the speaker's mind, including intentions, attitudes, sentiments and emotions.  A computer should be able to interact with users in their native language through text, speech, image and video: helping them find and extract information from the Internet, summarise that information, answer their questions and take action on their requests. This ability to be both informative and performative is a critical step forward.

Conference Paper

Who Framed Roger Rabbit? Multiple Choice Questions Answering about Movie Plot

  • Posted: 23 Oct 2017
  • Author: Daria Dzendzik, Carl Vogel
  • Publication: The Joint Video and Language Understanding Workshop: MovieQA and The Large Scale Movie Description Challenge (LSMDC 2017)
Conference Paper

Towards Evaluating the Impact of Anaphora Resolution on Text Summarisation from a Human Perspective

  • Posted: 24 Jun 2016
  • Author: Mostafa Bayomi, Séamus Lawless, Killian Levacher
  • Publication: NLPAR2015
Conference Paper

Topic-Informed Neural Machine Translation

  • Posted: 13 Dec 2016
  • Author: Andy Way, Jian Zhang, Liangyou Li
  • Publication: COLING 2016
Conference Paper

The DCU Discourse Parser for Connective, Argument Identification and Explicit Sense Classification

  • Posted: 31 Jul 2015
  • Author: Longyue Wang, Tsuyoshi Okita, Xiaojun Zhang
  • Publication: Proceedings of The SIGNLL Conference on Computational Natural Language Learning (CoNLL2015)

Research Goals

We analyse, annotate and extract meaningful information and knowledge from textual content across multiple languages and domains. We develop a range of robust, domain-agnostic linguistic analysis tools, which can be applied to any language and which are informed by cues from non-linguistic sources.

We aim to develop the theories and technologies to understand digital content through language at the following three layers:

  • Understanding language forms and structures, including morphology, syntax, semantics and discourse.  This research focuses on base language technologies, utilising state-of-the-art machine learning and deep learning approaches to obtain cross-lingual, cross-domain and cross-modal content representations and to improve morphological, syntactic and semantic analysis of digital content.
  • Understanding the physical world through language, including objects, relations, events, times and spaces.  This research focuses on using these representations to reason about language styles, events and topics. Researchers will advance machine learning techniques for content-based analysis by focusing on co-reference to entities and events at varying granularity, along with devising question-answering technology for events and novelty-detection technology for monitoring topics and events.
  • Understanding the human mind through language, including the speaker's intentions, attitudes, sentiments and emotions.  This research addresses the mechanics of variability in both the analysis and generation of digital content. Specific to this work is the notion that the multiple layered meanings of words and phrases can vary not only with the language, but with the speaker, the medium and the context. This includes detecting changes in the meaning of words over time or across domains, assisting the disambiguation of phrases and terms, and detecting meaning over structures larger than single words.
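As a toy illustration only, the three layers above can be sketched in plain Python. This is a minimal sketch: the tokeniser, the verb list and the sentiment lexicon below are invented for this example and are not part of any ADAPT system, where statistical and neural models would be used instead.

```python
import re

# Layer 1: language form -- a naive tokeniser over words and punctuation.
def tokenise(text):
    return re.findall(r"\w+|[^\w\s]", text.lower())

# Layer 2: the physical world -- spot a simple "who did what" event triple
# using a toy (subject, verb, object) pattern over adjacent tokens.
VERBS = {"released", "announced", "published"}

def extract_event(tokens):
    for i, tok in enumerate(tokens):
        if tok in VERBS and 0 < i < len(tokens) - 1:
            return (tokens[i - 1], tok, tokens[i + 1])
    return None

# Layer 3: the speaker's mind -- a tiny lexicon-based sentiment score.
POSITIVE = {"great", "promising", "success"}
NEGATIVE = {"poor", "disappointing", "failure"}

def sentiment(tokens):
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

text = "ADAPT released a promising translation demo"
tokens = tokenise(text)
print(tokens)                # layer 1: the surface form
print(extract_event(tokens)) # layer 2: ('adapt', 'released', 'a')
print(sentiment(tokens))     # layer 3: 1 (mildly positive)
```

Each function stands in for an entire research area; the point of the sketch is only that the layers build on one another, with world- and mind-level analysis consuming the output of form-level analysis.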