The recent Human+ Tech Talk, held on 1 November 2022, focused on the topic ‘In Tech We Trust: Embedding Human Values and Ethical Standards Into AI’ and was led by Human+ programme fellow Dr Nicola Palladino. Dr Palladino’s research concerns the safe regulation of online environments through the development of ethical artificial intelligence (AI) frameworks. He was joined by a panel of interdisciplinary experts from industry, academia, and governance who shed light on the topic at the seminar.
The panel comprised the following leaders: Prof Dave Lewis, Associate Professor, Trinity College Dublin, and Deputy Director of the SFI ADAPT Research Centre, whose research focuses on data protection and data ethics; Dr Kenneth McKenzie, Research Portfolio Lead, Human Sciences Studio, Accenture, who has worked extensively on understanding the human and societal impacts of technology; and Dr David Filip, Chair of NSAI TC 02/SC 18 AI and Convenor of ISO/IEC JTC 1/SC 42/WG 3 AI Trustworthiness, who has strong expertise in ICT standardisation.
Dr Palladino paved the way for insightful discussions by first giving an overview of the ethical AI landscape. He outlined a hybrid governance model of trustworthiness in AI, spanning the wider industry landscape, organisations within those industries, and the specific teams that lead relevant projects. The task of instilling trust is a shared responsibility, with multiple entities and stakeholders playing a variety of roles.
The discussion was carried forward by Prof Dave Lewis from a regulatory standpoint. Prof Lewis illustrated the complex landscape of ethical AI, crowded not just with numerous pieces of legislation and frameworks but also with a wide range of stakeholders involved in enhancing the trustworthiness of AI. Trustworthiness will depend on how effectively these stakeholders communicate with each other, he emphasised, which in turn helps us understand what the AI provider of the future might look like. He also called for further discussion to identify the missing elements of the AI Act (a common regulatory and legal framework proposed by the EU) and other standardisation policies, so that we can continually learn, analyse, and improve.
Dr Kenneth McKenzie then approached the topic from an enterprise angle. For citizens to trust AI, they first need to understand it. News articles sometimes describe systems as AI when the technology used is simply data analytics with some inferential statistics. He suggested the need for cognitive anchors, in the form of age-old systems, to help citizens understand AI better. For instance, court proceedings do not necessarily help people understand tax laws, but they do help people understand when they might be breaking them. Similarly, court proceedings might help citizens work out what is right and wrong when it comes to AI. “We will only make progress when we make use of these cognitive anchors to help people understand AI better,” he emphasised.
This was followed by Dr David Filip’s talk from an ICT standardisation perspective. As a member and leader of several technical committees on ethical AI, Dr Filip shed light on the day-to-day experiences of professionals in the area. The task is not as simple as bureaucrats telling a technical committee that AI must not violate human rights. Human rights are a complex subject that requires expert discussion to consider, navigate, and comply with. Moreover, technical professionals cannot be expected to provide diplomatic and political solutions, which is why there needs to be dialogue. He also highlighted the dangers of pushing regulations through too fast without understanding their implications. “Such legislation will undermine trust instead of increasing it,” he said.
The panel then answered some pressing questions from the moderator, illustrating on the one hand the importance of public discourse in this domain and, on the other, the danger of ethics washing. The panel agreed on the need for more interdisciplinary discussions in this area, to keep informing and transforming AI regulatory frameworks now and in the future.
The next and final Human+ Tech Talk will explore AI-enhanced personalisation in online tutoring systems, led by Human+ programme fellow Dr Qian Xiao and an expert interdisciplinary panel.
Human+ is a five-year international, inter- and transdisciplinary fellowship programme conducting ground-breaking research into human-centric approaches to technology development. Human+ is led by ADAPT, the Science Foundation Ireland Centre for Digital Content Innovation, at Trinity College Dublin, and the Trinity Long Room Hub Arts and Humanities Research Institute. The HUMAN+ project has received funding from the European Union’s Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No 945447. The programme is further supported by unique relationships with HUMAN+ Enterprise Partners.