Understanding Global Content

Natural language is the most intuitive medium for human-machine communication. Our vision is to contribute to the understanding of how language is used in human thought and communication, and thereby to achieve truly effective, frictionless human-human and human-machine interaction and collaboration through natural language. To achieve this goal, computers should understand not only the physical world the speaker refers to, including objects, relations, events, times and spaces, but also the speaker's mind, including intentions, attitudes, sentiments and emotions. The computer should be able to interact with users in their native languages through text, speech, image and video, including helping the user find and extract information from the Internet, summarise that information, answer the user's questions and take action on the user's requests. This ability to be both informative and performative is a critical step forward.

Research team



Examining a hate speech corpus for hate speech detection and popularity prediction

  • Posted: 13 May 2018
  • Authors: Filip Klubička, Raquel Fernandez
  • Publication: 4REAL Workshop 2018, co-located with LREC 2018

Storytelling by a Show of Hands: A framework for interactive embodied storytelling in robotic agents

  • Posted: 5 Apr 2018
  • Authors: Tony Veale, Phillip Wicke
  • Publication: AISB 2018 - Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour

Beef Cattle Instance Segmentation using Convolutional Neural Network

  • Posted: 4 Sep 2018
  • Authors: Robert Ross, John D. Kelleher, Aram Ter-Sarkisov, Bernadette Earley, Michael Keane
  • Publication: BMVC 2018 - British Machine Vision Conference

Transforming Global Content

ParFDA for Fast Deployment of Accurate Statistical Machine Translation Systems, Benchmarks, and Statistics

  • Posted: 1 Jul 2015
  • Author: Andy Way, Ergun Bicici
  • Publication: Tenth Workshop on Statistical Machine Translation

Research Goals

We analyse, annotate and extract meaningful information and knowledge from textual content across multiple languages and domains. We develop a range of robust, domain-agnostic linguistic analysis tools, which can be applied to any language and which are informed by cues from non-linguistic sources.

We aim to develop the theories and technologies to understand digital content through language at the following three layers:

  • Understanding language forms and structures, including morphology, syntax, semantics and discourse. This research focuses on base language technologies, utilising state-of-the-art machine learning and deep learning approaches to obtain cross-lingual, cross-domain and cross-modal content representations and to improve the performance of morphological, syntactic and semantic analysis of digital content.
  • Understanding the physical world through language, including objects, relations, events, times and spaces. This research focuses on using these representations to reason about language styles, events and topics. Researchers will advance machine learning techniques for content-based analysis by focusing on co-reference to entities and events at varying granularity, along with devising question-answering technology for events and novelty-detection technology for monitoring topics and events.
  • Understanding human minds through language, including the speaker's intentions, attitudes, sentiments and emotions. This research addresses the mechanics of variability in both the analysis and generation of digital content. Specific to this work is the notion that the multiple layered meanings of words and phrases can vary not only with the language, but with the speaker, the medium and the context. This includes detecting changes in the meaning of words over time or across domains, assisting the disambiguation of phrases and terms, and detecting meaning over structures larger than individual words.
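As a toy illustration of the third layer, the sketch below scores the sentiment of a sentence with a hand-written polarity lexicon and simple negation flipping. The lexicon, scores and negation rule are illustrative assumptions for this example only; they stand in for the far richer learned representations the research described above pursues.

```python
# Minimal lexicon-based sentiment scoring sketch.
# POLARITY and NEGATORS are illustrative assumptions, not a real resource.

POLARITY = {
    "good": 1.0, "great": 1.5, "love": 2.0,
    "bad": -1.0, "terrible": -1.5, "hate": -2.0,
}
NEGATORS = {"not", "never", "no"}

def sentiment_score(text: str) -> float:
    """Sum word polarities, flipping the sign of a word
    that immediately follows a negator."""
    score, negate = 0.0, False
    for token in text.lower().split():
        word = token.strip(".,!?")
        if word in NEGATORS:
            negate = True
            continue
        if word in POLARITY:
            score += -POLARITY[word] if negate else POLARITY[word]
        negate = False
    return score

print(sentiment_score("I love this, it is great"))    # 3.5
print(sentiment_score("not good, in fact terrible"))  # -2.5
```

Even this crude scorer shows why context matters: "good" and "not good" receive opposite signs, a small instance of the meaning variation by speaker, medium and context that the layer above targets.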