ADAPT Researchers Publish New Insights on Building Organisational Trust in AI Deployments

18 June 2025

A scoping review by researchers from ADAPT, the Centre for Innovative Human Systems at Trinity College Dublin, University College Dublin and St James’s Hospital explores how to make artificial intelligence (AI) truly trustworthy in real-world organisational settings.

In a new publication in the highly respected journal Ergonomics, an interdisciplinary team led by ADAPT academics Asst. Prof. Rob Brennan (UCD) and Prof. Nick McDonald (TCD) has delivered the first systematic exploration of the link between trustworthy AI (TAI) and organisational trust using socio-technical systems analysis. This breakthrough work addresses a crucial yet under-examined question: how can we build and maintain trust in AI systems within complex organisational environments?

The paper, titled “Trustworthy Artificial Intelligence and Organisational Trust: A Scoping Review Using Socio-Technical Systems Analysis”, stems from collaborative research across ADAPT, St James’s Hospital, and the Centre for Innovative Human Systems (CIHS) at Trinity’s School of Psychology. It is rooted in practical healthcare challenges, specifically deploying AI systems to assist infection control, and speaks directly to the global implementation gap between high-level TAI guidelines and real-world AI adoption.

Speaking about the research, Dr Rob Brennan said: “Organisational trust is critical for delivering real productivity gains from both people and technology. We wanted to look beyond the principles of trustworthy AI, which implicitly assume that these are sufficient conditions for trust in AI systems, to understand what truly builds, sustains, and repairs trust in dynamic, complex environments like hospitals.”

As part of the research, the team conducted a structured review of 803 academic papers, synthesising insights from 54 studies that addressed AI trustworthiness and organisational implementation. Their novel application of socio-technical systems analysis revealed significant gaps in the current TAI literature, including the lack of longitudinal perspectives, contextual analysis, and organisational maturity considerations in AI deployment strategies.

Rather than treating “trust” as an outcome of conforming to AI ethics checklists, the review emphasises trust as an evolving organisational asset, one that must be actively cultivated across teams, systems, and timeframes.

While the European Commission’s guidelines on Trustworthy AI have advanced the conversation, this paper argues that implementation success hinges on mechanisms that foster organisational trust, not just compliance.

Professor Nick McDonald said: “This work shifts the lens from theoretical design requirements to applied socio-technical integration, focusing on understanding how people, processes, and technologies interact, and how those interactions can erode or reinforce trust in AI.”

The full paper is available open access: https://doi.org/10.1080/00140139.2025.2512426

The themes of the paper are being discussed at the HEPS 2025 conference in TCD this week.

The research was conducted by Rebecca Vining (ADAPT), Saijad Karimian (d-real Centre for Research Training), Nick McDonald (Centre for Innovative Human Systems, TCD), Malick Ebiele (ADAPT), Brian Doyle (Centre for Innovative Human Systems, TCD), Lucy McKenna (ADAPT), Marie E. Ward (Centre for Innovative Human Systems, TCD), Malika Bendechache (Lero and ADAPT), and Rob Brennan (ADAPT).