When I read about the scope and structure of the Human+ programme, I knew it was exactly what I needed to boost my research. As a political scientist focused on Internet governance and human rights in the digital sphere, I think a human-centric approach is key to addressing the challenges posed by the disruptive rise of artificial intelligence systems (AIs).
Why so? Well, AIs are complex socio-technical systems, meaning that their technical specifications can influence or discipline human behaviour in numerous ways. At the same time, they are shaped by several social factors, such as the assumptions of their creators, organisational culture, or the behavioural data on which they are based. It’s become clear that we need instruments to understand the social and political implications of these technical features, and to ensure that their design embeds legitimised social values.
My Human+ project takes up this challenge by building a state-of-the-art toolkit of recommendations and frameworks for developing an ethical approach to AI that integrates social and technical mechanisms of governance. In the past few years, we’ve witnessed a flourishing of ethical initiatives to deal with the concerns raised by AI technologies, but there remains a “principle-to-practice” gap. Stakeholders are now increasingly aware that the full potential of this technology is attainable only by building a trustworthy and human-centric framework, meaning that AI systems must be aligned with human values and governed through accountable arrangements to avoid misuse (or indeed underuse due to a lack of public acceptance).
But this is a very hard task. Practitioners struggle to translate governance principles into operational routines, while on the other side, purely technical tools, by reducing ethical problems to mathematical formalisations, risk missing the actual needs and concerns that society raises in practical contexts.
In this regard, what I liked most about the Human+ fellowship was the unique opportunity to work under the guidance of a supervisory team including a scholar from the social sciences, another from computer science, and a supervisor from a secondment organisation beyond academia. I believe that this interdisciplinary and inter-sectoral approach is necessary to handle complex socio-technical systems such as AI by bridging and integrating different understandings and competencies. The support of my principal supervisor Professor Blanaid Clarke, a leading scholar in corporate governance, law, and ethics, has been crucial in dealing with ethical management and accountability in tech companies. The expertise of Professor David Lewis on artificial intelligence and standardisation processes provides me with the necessary support to understand the meaning and effects of particular technical choices in designing AI systems.
Furthermore, in the coming months, I’ll be mentored by an enterprise partner involved in the development and deployment of AI systems and will undertake a secondment at their organisation. This dialogue with a leading company in the development of responsible AI solutions will give me a deeper understanding of the concrete processes of AI design and management from an organisational point of view, and it will be an opportunity to bring my knowledge and expertise into an operative context.
The Human+ programme has provided me with priceless support to move forward with my research and to improve my professional profile. I’ve been introduced to a vibrant community of scholars and experts and involved in a dense agenda of initiatives, mainly revolving around the two research centres partnering in the project: the Trinity Long Room Hub Arts and Humanities Research Institute and the ADAPT Centre for AI-Driven Digital Content Technology. Being based in the Trinity Long Room Hub, I’ve had the chance to engage in daily conversations with humanities scholars and researchers working in the fields of computer science and digital technologies. The weekly coffee mornings have allowed me to get to know people interested in similar research questions, and get chatting about the approaches and methods they themselves use in dealing with digital technologies.
The involvement of ADAPT has been of huge benefit as well. Thanks to its transparent digital governance research strand, I’ve been able to join interdisciplinary working groups committed to designing new multi-stakeholder governance practices, supporting organisations to balance responsibility with value extraction.
The Human+ programme has also introduced me to the system of standard-setting organisations at national, European and international levels, and I’m currently involved in several working groups developing standards for trustworthy and human-centric AI. This has provided a unique opportunity to expand my network, learn about the most up-to-date developments in the field, and be part of international policy-making processes.
Thanks to Human+, I hope to be in a position to make a real contribution to the development of a more ethical approach to AI.
This article is written by Dr Nicola Palladino, a Human+ programme fellow working at the Trinity Long Room Hub Arts and Humanities Research Institute, and ADAPT Centre of Excellence for AI-Driven Digital Content Technology at Trinity College Dublin, Ireland. His work focuses on questions that consider technological developments from the humanistic perspective.
Nicola will be holding a Human+ Tech Talk seminar to further explore this subject in-depth with an expert and interdisciplinary panel. To attend it on campus at the Trinity Long Room Hub or online, sign up with the link below.