Transforming Global Content

Projects on Machine Translation (MT) modelling, MT training-data scarcity and human factors are pivotal to extending research in MT and human translation, and to increasing their business impact.

New deep learning techniques are being augmented with linguistic knowledge to constrain the explosion of the MT decoding space caused by increasing model complexity. Cloud-based models seed MT engines built on the fly from small amounts of data targeted to the translational requirements of the input document. We extend our previous research on domain adaptation to new ADAPT sectors and data types by using grounding semantics, filtering out ‘noisy’ input and, where data is in short supply, supplementing parallel training data with comparable corpora. We also extend our previous ethnographic studies of real users of MT output, which will uncover cognitive and social barriers to MT acceptability. Novel evaluation schemes are being developed to meet industry needs for flexible, configurable quality measures that directly reflect organisations’ core goals.
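As an illustration of the ‘noisy’ input filtering mentioned above, the following is a minimal sketch of heuristic parallel-corpus cleaning; the thresholds and helper names (`keep_pair`, `filter_pairs`) are hypothetical and do not describe ADAPT’s actual pipeline.

```python
# Illustrative sketch only: simple heuristics for filtering 'noisy'
# sentence pairs out of parallel training data. Thresholds and helper
# names are hypothetical assumptions, not the project's actual pipeline.

def keep_pair(src: str, tgt: str,
              max_len: int = 100,
              max_ratio: float = 3.0) -> bool:
    """Return True if the sentence pair passes basic noise checks."""
    src_toks, tgt_toks = src.split(), tgt.split()
    # Drop empty or excessively long segments.
    if not src_toks or not tgt_toks:
        return False
    if len(src_toks) > max_len or len(tgt_toks) > max_len:
        return False
    # Drop pairs with implausible length ratios (likely misalignments).
    ratio = len(src_toks) / len(tgt_toks)
    if ratio > max_ratio or ratio < 1.0 / max_ratio:
        return False
    # Drop pairs where source and target are identical (often untranslated text).
    if src.strip() == tgt.strip():
        return False
    return True


def filter_pairs(pairs):
    """Yield only the sentence pairs that pass the noise heuristics."""
    for src, tgt in pairs:
        if keep_pair(src, tgt):
            yield src, tgt
```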

Publications

Conference Paper

Finding Relevant Translations for Cross-lingual User-generated Speech Search

  • Posted: 1 Apr 2017
  • Authors: Haithem Afli, Andy Way, Gareth Jones
  • Publication: WANLP 2017 - Third Arabic Natural Language Processing Workshop
Conference Paper

Dublin City University at the TweetMT 2015 Shared Task

  • Posted: 1 Sep 2015
  • Authors: Jinhua Du, Antonio Toral, Xiaofeng Wu, Tommi Pirinen, Zhengwei Qiu, Ergun Bicici
  • Publication: TweetMT 2015
Conference Paper

Semantics-Enhanced Task-Oriented Dialogue Translation: A Case Study on Hotel Booking

  • Posted: 27 Nov 2017
  • Authors: Longyue Wang, Jinhua Du, Andy Way, Liangyou Li
  • Publication: 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)
Journal Article

A novel and robust approach for pro-drop language translation

  • Posted: 13 Jan 2017
  • Authors: Longyue Wang, Andy Way, Zhaopeng Tu, Xiaojun Zhang, Siyou Liu, Hang Li
  • Publication: Machine Translation

Research Goals

We provide MT with increased intelligence by developing engines that incorporate syntax, semantics and discourse features; MT models constrained using deep learning techniques; cloud-based data models for use by (disposable) MT engines; and engines for sentiment analysis and translation.

We connect texts with the real world and investigate different ways to leverage grounding semantics (in contrast to abstract semantics), including named entities and relations, multimodality and discourse semantics, to improve translation quality in various scenarios. We use a state-of-the-art neural MT framework to incorporate grounding semantics and rich linguistic features.
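As a hedged sketch of how rich linguistic features can be fed into a neural MT encoder, the fragment below embeds source words together with one linguistic factor (for example a named-entity or part-of-speech tag) and concatenates the two embeddings before encoding; the class name, vocabulary sizes and dimensions are illustrative assumptions, not the centre’s actual system.

```python
# Minimal sketch (assumed setup, not ADAPT's system): "source factors"
# for neural MT, where each token carries a linguistic tag whose embedding
# is concatenated with the word embedding before the encoder.
import torch
import torch.nn as nn

class FactoredEmbedding(nn.Module):
    def __init__(self, word_vocab=32000, word_dim=512,
                 factor_vocab=64, factor_dim=16):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.factor_emb = nn.Embedding(factor_vocab, factor_dim)

    def forward(self, word_ids, factor_ids):
        # word_ids, factor_ids: (batch, seq_len) index tensors.
        # Returns (batch, seq_len, word_dim + factor_dim), ready for an encoder.
        return torch.cat([self.word_emb(word_ids),
                          self.factor_emb(factor_ids)], dim=-1)

# Example: a batch of 2 sentences of length 5 with word and tag indices.
words = torch.randint(0, 32000, (2, 5))
tags = torch.randint(0, 64, (2, 5))
print(FactoredEmbedding()(words, tags).shape)  # torch.Size([2, 5, 528])
```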

Through a human-factors-oriented approach, we seek to understand the blocking points to MT adoption in order to overcome them. We take a cognitive-ergonomics approach, investigating three types of factors: cognitive (e.g. the best presentation of MT output), physical (e.g. reduced editing effort) and organisational (e.g. the best organisation of work for the adoption of MT).
