HUMAN+ Research Profile: Dr. Nicola Palladino, Ethical AI and the Challenges of Governance

27 October 2022

We can all see Artificial Intelligence increasingly influencing the way we make decisions. Dr Nicola Palladino, a HUMAN+ fellow with a background in Political Science, is interrogating how to address this influence. He is working on an ethical framework for AI to ensure the safe regulation of online environments.

Over the last five years of analysing how new technologies affect human rights, Palladino has observed that digital technologies are not only giving rise to novel policy fields, they are “creating a new layer of governance.” They now determine whether certain courses of action are permitted, and they influence choices by selecting and emphasising information about the available options. Frequently, digital technologies make decisions on crucial aspects of people’s lives. This can range from recommendations on Spotify or Netflix, through misleading news stories shared on social media, to the choice of the best medical treatment or the best bus route for a community, and on to selection procedures in education, the workplace, and retail banking.

This emerging layer of governance makes traditional legal approaches insufficient. The more our social life is transferred into and shaped by the digital world, the more weight we should place on technical design and specifications, because that is where governance becomes effective. “We see this in echo chambers, influenced by algorithms, deciding what you see.” If laws are not written with the requisite level of technological knowledge and transposed into digital architectures, they will become increasingly difficult to enforce, especially when they are meant to safeguard people’s rights, integrity, and autonomy.

In the last few years, Palladino explains, there has been a greater focus on Ethical AI. He highlights the “myth of objectification” that previously shaped approaches to digital governance: the belief that technology can make decisions without the biases, interests, and misconceptions that affect humans. It is now becoming clear that technology, consciously or not, embodies the interests and values of the people who created it. These biases must therefore be interrogated and combatted in order to ensure equity and fairness within digital technologies. “We need to find a way to govern technology effectively, with trustworthy management, under the control of the wider community.”

However, current Ethical AI practices often reproduce the very problems they are trying to address. Palladino argues that the field often relies on automatic tools to measure and mitigate biases and other potential harms, when what is needed is stronger human oversight. In his words, “we are relying on algorithms to fix algorithms!” Palladino’s research focuses instead on how to embed ethical and human rights standards within the socio-technical design of AI systems, to ensure the primacy of human input.
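To make concrete what such automatic tools look like, here is a minimal, hypothetical sketch in Python of one widely used fairness metric, the demographic parity gap: the difference in positive-outcome rates between groups. The function name, data, and group labels are invented for illustration; this is not Palladino’s toolkit, only an example of the kind of purely algorithmic check he cautions against over-relying on.

```python
# A minimal, hypothetical sketch of an automated bias check: the
# "demographic parity gap" -- the spread in positive-outcome rates
# across groups. All names and data below are illustrative only.

def demographic_parity_gap(outcomes, groups):
    """Return the largest difference in positive-outcome rates between groups."""
    counts = {}  # group -> (positive outcomes, total decisions)
    for outcome, group in zip(outcomes, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + outcome, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions (1 = approved) for two groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(outcomes, groups):.2f}")
# Group A is approved 60% of the time, group B 40%, so the gap is 0.20.
```

A check like this can flag a statistical disparity automatically, but it cannot say whether the disparity is unjust in context, which groups matter, or what trade-off is acceptable. That contextual judgement is precisely the human input Palladino argues must remain primary.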

During the first year of his fellowship, Palladino has been mapping AI ethics initiatives, guidelines, and legislation, as well as the recent wave of AI ethics tools, to capture the current technological landscape comprehensively. What he discovered was a mismatch between AI guidelines and their implementation, pointing to deficiencies in current attempts to put AI Ethics into practice.

Where does this mismatch come from? Palladino argues that one crucial factor is that, when political discussions turn to AI, there is little focus on making its operation trustworthy and human-centric. AI Ethics is still viewed primarily as a technological matter, approached with an engineering mindset. To close this gap, “we need social scientists to learn computer science, and computer scientists to be aware of, and responsible for, the social and political implications of their choices”. Initiatives to train engineers in AI ethics are positive steps but still insufficient. We need more interdisciplinary approaches in education, and professionals capable of mastering the interaction between social processes and digital technologies. Indeed, Palladino believes that initiatives like HUMAN+ can pave the way for more integrated future pathways.

Palladino ultimately sees his research as the beginning of a “hybrid vision” that works with both governments and tech companies. “We need collaboration between the two because technical knowledge needs a legal framework provided by government,” and without technical knowledge, we won’t be able to combat the problems. While the EU is introducing AI legislation next year, much of the implementation will be left to AI companies. The problem with this, Palladino argues, is that “if you rely too much on one group they may end up working too much in their own interests or usual mindset, not necessarily maliciously. AI companies just have more understanding from a technical side.”

For now, this hybrid vision needs to start at the underlying infrastructural level. This requires inclusiveness in a system’s design, which means close interdisciplinary work with researchers not traditionally considered computer scientists. Palladino believes that “in the near future we need to create a new generation of scholars with skills and competencies from both Computer Science and the Social Sciences. We need people who understand the social implications of AI, and are able to embed human values into technical design. At the moment we totally lack this professional field, so it’s important that programmes like HUMAN+ enable these fields to interact.”

Having supervisors from different fields was one of the main reasons Palladino chose the HUMAN+ programme. With support from Prof Blanaid Clarke from the School of Law and Prof David Lewis from Computer Science and Statistics, he has been introduced to novel situations and environments, including national and international AI standard-setting initiatives. He is now in a position to use his knowledge to create an AI ethics toolkit that can be applied in industry and government organisations alike. There is a lot of momentum for Computer Science in Dublin, with big tech companies and a huge concentration of expertise, and he finds the city a vibrant place with many opportunities to connect with academic and tech organisations.

As with all HUMAN+ fellows, Palladino will be seconded to a non-academic organisation, in his case Accenture, a global professional services company with leading capabilities in digital, cloud, and security, working across more than 40 industries and serving clients in more than 120 countries. Prof Blanaid Clarke, Palladino’s supervisor from the humanities side, highlights the transdisciplinarity of the project as one of its most important features.

This research is crucial now because we are at a crossroads, Palladino argues. “We need mechanisms to ensure this technology is trustworthy.” In the future we will come to rely even more on digital technology. If we are not careful in implementing AI legislation, we may amplify inequalities that already exist in society, concentrating more power within small, powerful groups and potentially leading to unprecedented levels of control.

However, he does not wish to be pessimistic. He quotes the Italian philosopher and political theorist Antonio Gramsci, who believed in “pessimism of the intellect, optimism of the will.” “Everything is in our hands and now is the time to act,” Palladino believes. Though growing, AI Ethics is still at the margins of political debate, so it is crucial that it receives further attention. “The governance of digital technologies, jointly with the environment, are the big questions that will define our future and the future of humanity. It’s interesting to see how both require a combination of technological development and Social Sciences.”