Since its inception in the last century, artificial intelligence (AI) has gone through various stages of evolution. Today, state-of-the-art AI technology is expanding largely unchecked, shaping access to services and opportunities that have become fundamental to everyday life, from jobs and mortgages to healthcare and sustainable development.
A necessary and growing body of work on the effective governance of such technology is being led by various international and interdisciplinary bodies. Comprising frameworks, policies and guidelines, this body of work addresses the legal, social, ethical and policy issues surrounding AI. With a large volume of such work released in recent years, questions are emerging about the diversity of views expressed, especially regarding the influence of Global North or Euro-American perspectives.
New research by ADAPT computer scientist and Deputy Director Dave Lewis, Research Fellow P. J. Wall and PhD student Cathy Roche examines a significant corpus of AI ethics and policy literature, focusing on the role of underrepresented groups in the wider AI discourse. It expands on earlier ADAPT analysis of largely grey literature, which had identified blind spots in both gender representation and perspectives from the Global South.
The new research finds that voices from the Global South, and consideration of alternative ethical approaches, are largely absent from the conversation on ethical AI policies and strategies. Given the prominence of social, cultural and ethical perspectives from the Global North, the research explores the implications of this dominance for the development of standards for ethical AI. It concludes by proposing ways to incorporate more diverse ethical viewpoints and beliefs, and calls for greater attention to power structures when developing AI ethics policies and standards within these alternative socio-cultural and socio-economic contexts.
AI innovations have demonstrated worldwide their phenomenal potential to address many problems otherwise considered intractable, including those highlighted by the United Nations Sustainable Development Goals. Nevertheless, the technology also raises concerns, including semantic biases in machine learning, questions of machine ethics, gender biases and the role of AI in enabling disinformation across social media. For this reason, frameworks, policies and strategies for ethical AI must be reviewed systematically to ensure that biases are mitigated rather than carried forward, for the benefit of all. To this end, ADAPT research plays an important role in developing guidelines and approaches for bringing more diverse ethical perspectives into the field.
Continue reading the full research paper, published in the AI and Ethics journal.