Transforming Global Content


Projects on Machine Translation (MT) modelling, MT training-data scarcity and human factors are pivotal to extending research in MT, human translation and their business impact.

New deep learning techniques are being augmented with linguistic knowledge to constrain the explosion of the MT decoding space that accompanies increasing model complexity. Cloud-based data models seed MT engines built on the fly from small amounts of data targeted at the translational requirements of the input document. We extend our previous research on domain adaptation to new ADAPT sectors and data types by using grounding semantics, filtering out ‘noisy’ input and, where data is in short supply, supplementing parallel training data with comparable corpora. We also extend our previous ethnographic studies of real users of MT output to uncover cognitive and social barriers to MT acceptability. Novel evaluation schemes are being developed to meet industry needs for flexible, configurable quality measures that directly reflect core organisational goals.
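As a rough illustration of the kind of ‘noisy’-input filtering mentioned above (not the ADAPT pipeline itself), simple heuristics such as length limits and source–target length ratios are commonly used to discard implausible sentence pairs from parallel training data. The thresholds below are hypothetical examples.

```python
# Illustrative sketch: filtering noisy sentence pairs from parallel
# training data with simple length-based heuristics.
# Thresholds (max_ratio, max_len) are hypothetical example values.

def is_clean(src: str, tgt: str, max_ratio: float = 3.0, max_len: int = 100) -> bool:
    """Keep a pair only if both sides are non-empty, neither exceeds
    max_len tokens, and their token-length ratio is plausible."""
    src_toks, tgt_toks = src.split(), tgt.split()
    if not src_toks or not tgt_toks:
        return False
    if len(src_toks) > max_len or len(tgt_toks) > max_len:
        return False
    ratio = max(len(src_toks), len(tgt_toks)) / min(len(src_toks), len(tgt_toks))
    return ratio <= max_ratio

def filter_corpus(pairs):
    return [(s, t) for s, t in pairs if is_clean(s, t)]

pairs = [
    ("the cat sat on the mat", "le chat était assis sur le tapis"),
    ("hello", ""),  # empty target: dropped
    ("a", "une très longue phrase qui ne correspond pas du tout"),  # length ratio too high: dropped
]
clean = filter_corpus(pairs)
```

Production filtering typically adds further checks (language identification, character-set sanity, deduplication), but the length-ratio heuristic alone removes many misaligned pairs.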

Publications

A Comparative Quality Evaluation of PBSMT and NMT using Professional Translators

  • Posted: 1 Aug 2017
  • Authors: Sheila Castilho, Joss Moorkens, Federico Gaspari, Pintu Lohar, Andy Way, Rico Sennrich, Vilelmini Sosoni, Yota Georgakopoulou, Antonio Valerio Miceli Barone and Maria Gialama
  • Publication: MT Summit XVI - 16th Machine Translation Summit

Tailoring Neural Architectures for Translating from Morphologically Rich Languages

  • Posted: 20 Aug 2018
  • Authors: Andy Way, Peyman Passban, Qun Liu
  • Publication: COLING 2018 - 27th International Conference on Computational Linguistics

Questing for Quality Estimation. A user study

  • Posted: 31 May 2017
  • Authors: Carla Parra Escartín, Hanna Béchara, Constantin Orasan
  • Publication: The Prague Bulletin of Mathematical Linguistics
Journal Article

¿Cómo ha evolucionado la traducción automática en los últimos años? [How has machine translation evolved in recent years?]

  • Posted: 4 Apr 2018
  • Author: Carla Parra Escartín
  • Publication: La Linterna del Traductor. La revista multilingüe de ASETRAD

Research Goals

We provide MT with increased intelligence by developing engines that incorporate syntax, semantics and discourse features; constrained MT models using deep learning techniques; cloud-based data models for use by (disposable) MT engines; and engines for sentiment analysis and translation.

We connect texts with the real world, and investigate different ways to leverage grounding semantics (in contrast to abstract semantics), including named entities and relations, multimodality, and discourse semantics, to improve translation quality in various scenarios. We use state-of-the-art neural MT frameworks to incorporate grounding semantics and rich linguistic features.

Through a human-factors-oriented approach, we seek to understand the blocking points to MT adoption in order to overcome them. We take a cognitive-ergonomic approach that investigates three types of factors: cognitive (e.g. the best presentation of MT output), physical (e.g. reduced editing effort) and organisational (e.g. the best organisation of workflows for the adoption of MT).


