Understanding Global Content

Natural language is the most intuitive medium for human-machine communication.  Our vision is to deepen the understanding of how language is used in human thought and communication, and thereby to achieve truly effective, frictionless human-human and human-machine interaction and collaboration through natural language.  To achieve this goal, computers must understand not only the physical world a speaker refers to, including objects, relations, events, times and spaces, but also the speaker's mind, including intentions, attitudes, sentiments and emotions.  A computer should be able to interact with users in their native language through text, speech, image and video, helping them find and extract information from the Internet, summarise that information, answer their questions and take action on their requests. This ability to be both informative and performative is a critical step forward.


A Social Opinion Gold Standard for the Malta Government Budget 2018

  • Posted: 4 Nov 2019
  • Author: Cortis, Keith and Davis, Brian
  • Publication: W-NUT 2019 - 5th Workshop on Noisy User-generated Text

IronyMagnet at SemEval-2018 Task 3: A Siamese network for Irony detection in Social media.

  • Posted: 5 Jun 2018
  • Author: Tony Veale
  • Publication: SemEval 2018 - 12th International Workshop on Semantic Evaluation
Book Chapter

From Conceptual Mash-ups to Badass Blends: A Robust Computational Model of Conceptual Blending

  • Posted: 26 Jul 2019
  • Author: Tony Veale
  • Publication: Computational Creativity. Computational Synthesis and Creative Systems

Inferential models of mental workload with defeasible argumentation and non-monotonic fuzzy reasoning: a comparative study

  • Posted: 20 Nov 2018
  • Author: Rizzo L. and Longo L.
  • Publication: AI³ 2018 - 2nd Workshop on Advances In Argumentation In Artificial Intelligence

Research Goals

We analyse, annotate and extract meaningful information and knowledge from textual content across multiple languages and domains. We develop a range of robust, domain-agnostic linguistic analysis tools, which can be applied to any language and which are informed by cues from non-linguistic sources.
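As a toy illustration of what domain- and language-agnostic analysis can look like (a hypothetical sketch, not one of the centre's actual tools), character n-grams give a text representation that needs no language-specific tokenisation rules, so the same code applies to any language:

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-grams: a simple language-agnostic text representation.

    Unlike word tokenisers, character n-grams require no language-specific
    rules, so the same code handles English, Irish, Chinese, and so on.
    """
    padded = f" {text.strip().lower()} "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(a, b):
    """Cosine similarity between two n-gram count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm = lambda v: sum(c * c for c in v.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0

en = char_ngrams("natural language understanding")
en2 = char_ngrams("understanding natural language")
ga = char_ngrams("tuiscint ar theanga nádúrtha")  # Irish

# Reordered English shares most trigrams; the Irish paraphrase shares few.
print(cosine(en, en2), cosine(en, ga))
```

Real systems would feed such surface representations into learned cross-lingual embeddings, but the comparison above already separates near-duplicates from translations without any language-specific resources.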

We aim to develop the theories and technologies to understand the digital context through languages in the following three layers:

  • Understanding language forms and structures, including morphology, syntax, semantics and discourse.  This research focuses on base language technologies, utilising state-of-the-art machine learning and deep learning approaches to obtain cross-lingual, cross-domain and cross-modal content representations and to improve morphological, syntactic and semantic analysis of digital content.
  • Understanding the physical world through languages, including objects, relations, events, times and spaces.  This research focuses on using those representations to reason about language styles, events and topics. Researchers will advance machine learning techniques for content-based analysis by focusing on co-reference to entities and events at varying granularity, along with devising question-answering technology for events and novelty-detection technology for monitoring topics and events.
  • Understanding human minds through languages, including the speaker's intentions, attitudes, sentiments and emotions.  This research addresses the mechanics of variability in both the analysis and generation of digital content. Central to this work is the notion that the layered meanings of words and phrases can vary not only with the language but also with the speaker, the medium and the context. It encompasses detecting changes in the meaning of words over time or across domains, assisting the disambiguation of phrases and terms, and detecting meaning over structures larger than individual words.
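To make the third layer concrete, here is a deliberately tiny, hypothetical sketch of lexicon-based sentiment scoring with simple negation handling. The lexicon and its scores are invented for illustration; systems of the kind described above learn such signals from annotated data rather than hand-coding them:

```python
# Illustrative placeholder lexicon, not a real sentiment resource.
SENTIMENT_LEXICON = {
    "great": 1.0, "love": 1.0, "helpful": 0.5,
    "poor": -0.5, "hate": -1.0, "broken": -1.0,
}

NEGATORS = {"not", "never", "no"}

def sentiment(text):
    """Score a sentence in [-1, 1]; a directly preceding negator flips polarity."""
    tokens = text.lower().split()
    score, hits = 0.0, 0
    for i, tok in enumerate(tokens):
        if tok in SENTIMENT_LEXICON:
            polarity = SENTIMENT_LEXICON[tok]
            if i > 0 and tokens[i - 1] in NEGATORS:
                polarity = -polarity
            score += polarity
            hits += 1
    return score / hits if hits else 0.0

print(sentiment("I love this tool"))     # positive
print(sentiment("this is not helpful"))  # negation flips polarity
```

The sketch shows why speaker meaning is harder than word meaning: the same word ("helpful") carries opposite sentiment depending on its context, which is exactly the variability this research layer addresses.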

