New Advances in Trustworthy AI: Privacy, Cultural Reasoning, and LLM Robustness from MTU at LREC 2026

22 April 2026

New research from the ADAPT Centre’s Human-Centred AI (HAI) team at Munster Technological University (MTU) highlights significant advances in trustworthy artificial intelligence, spanning privacy-preserving methods, cultural reasoning in large language models (LLMs), and robustness to evolving knowledge.

A key contribution focuses on the operationalisation of the “right to be forgotten” in LLMs, an increasingly important requirement in the context of AI governance and data protection. In the paper “Operationalising the Right to be Forgotten in LLMs: A Lightweight Sequential Unlearning Framework for Privacy-Aligned Deployment in Politically Sensitive Environments”, Esen Kurt and Haithem Afli propose a practical and efficient approach to aligning LLM behaviour with core privacy principles. Their lightweight sequential unlearning framework distinguishes between retention and suppression objectives and enables the controlled removal of sensitive knowledge while preserving overall model performance.

This work is part of a broader research agenda within the ADAPT HAI team at MTU, which focuses on explainability, cultural awareness, and ethical AI deployment in high-stakes and socially sensitive domains. The paper will be presented at the PoliticalNLP 2026 workshop, a leading venue dedicated to the intersection of NLP, political science, and computational social science.

In parallel, the team has two papers accepted at the LREC 2026 main conference, further reinforcing its leadership in human-centred NLP research:

  • “CRaFT: An Explanation-Based Framework for Evaluating Cultural Reasoning in Multilingual Language Models” (Shehenaz Hossain and Haithem Afli) introduces a novel evaluation paradigm that assesses how LLMs reason across diverse cultural contexts through explanation-based analysis.
  • “Dynamic Model Switching to Mitigate Outdated Knowledge in Large Language Models” (Ramakrishna Pinninti, Sabyasachi Kamila, Ayan Mazumder, and Mohammed Hasanuzzaman) explores adaptive mechanisms to address knowledge staleness, enabling LLMs to remain reliable in dynamic, real-world environments.

Together, these contributions illustrate a comprehensive approach to trustworthy AI, addressing privacy, interpretability, and robustness as interconnected challenges in the design and deployment of next-generation language technologies.

The PoliticalNLP 2026 workshop, co-located with LREC 2026 in Palma de Mallorca, centres on the theme “Trust, Transparency, and Generative AI in Political Discourse”. It is co-organised by an international team of researchers, with Dr Haithem Afli serving as General Chair, reflecting the prominent role of MTU and ADAPT in this rapidly evolving interdisciplinary field. Under his leadership, the workshop continues to grow as a key venue for examining the societal and ethical implications of AI, bringing together perspectives from NLP, media studies, law, and governance.

As highlighted in the workshop preface, PoliticalNLP has become an important platform for advancing research on bias, explainability, misinformation, and the integrity of democratic discourse in AI systems.