ADAPT DCU-NLG members present research at the 16th International Natural Language Generation Conference

ADAPT researchers and DCU-NLG group members Rudali Huidrom, Michela Lorandi and Simon Mille recently presented their research at the 16th International Natural Language Generation Conference (INLG). This year, INLG was held jointly with the 24th SIGDIAL conference in Prague, Czechia, from 11 to 15 September.

Michela Lorandi (PhD Student) presented the paper “Data-to-text Generation for Severely Under-Resourced Languages with GPT-3.5: A Bit of Help Needed from Google Translate”, co-authored with Prof. Anya Belz (Professor of Computer Science, Dublin City University). The work examined how large language models cope with tasks involving languages that are severely under-represented in their training data, in the context of data-to-text generation for Irish, Maltese, Welsh and Breton. Access the paper here. The described system participated in the WebNLG shared task on generating texts from DBpedia triples, where it ranked first for the three languages for which outputs were submitted (Irish, Welsh and Maltese).

Simon Mille (Postdoctoral Fellow) presented the paper “Mod-D2T: A Multi-Layer Dataset for Modular Data-to-Text Generation”. This paper leverages the advantages of rule-based text generators to create large and reliable synthetic datasets with multiple human-intelligible intermediate representations. Co-authors include Prof. Anya Belz, Stamatia Dasiopoulou and François Lareau. Access the paper here. He also presented the other DCU-NLG submission to the WebNLG shared task, “DCU/TCD-FORGe at WebNLG’23: Irish rules!”, for which a fully rule-based generator for Irish was developed together with Elaine Uí Dhonnchadha (TCD), Stamatia Dasiopoulou, Lauren Cassidy (DCU), Brian Davis (DCU) and Prof. Anya Belz; the system ranked second for the Irish task. Access the paper here.

Rudali Huidrom (PhD Student) presented the paper “Towards a Consensus Taxonomy for Annotating Errors in Automatically Generated Text”, co-authored with Prof. Anya Belz. The paper reviews existing research on meaning and content error types in generated text and attempts to identify emerging consensus among existing meaning/content error taxonomies. Access the paper here.