AI is developing at such a rapid pace that we can get caught up in its potential capabilities and its role in our future. However, there are still many issues to work out.

ADAPT recently hosted its Annual Scientific Conference 2024 in Dublin, and today we’re hearing from one of the keynote speakers, Abeba Birhane. We learn about the potential dangers of large-scale datasets, from AI hallucinations to the reinforcement of societal biases and negative stereotypes.

Our expert guest has been exploring strategies both for making incremental improvements and for guiding broader structural changes in AI. She is Abeba Birhane, Senior Advisor for AI Accountability at Mozilla, Adjunct Professor at Trinity College Dublin and a new ADAPT member.



How rumours of autonomous AI distract from real issues 

Hallucinations producing factually incorrect information 

AI ownership concentrating power in the hands of the few 

Data issues with collection, copyright and biases 

Creating standards for the safe use and development of AI 


Abeba Birhane is a cognitive scientist, currently a Senior Advisor in AI Accountability at the Mozilla Foundation and an Adjunct Assistant Professor in the School of Computer Science and Statistics at Trinity College Dublin (working with Trinity’s Complex Software Lab).

She researches human behaviour, social systems, and responsible and ethical artificial intelligence, and was recently appointed to the UN’s Advisory Body on AI. Abeba works at the intersection of complex adaptive systems, machine learning, algorithmic bias, and critical race studies. In her current work, she examines the challenges and pitfalls of computational models and datasets from conceptual, empirical, and critical perspectives.

Abeba Birhane holds a PhD in cognitive science from the School of Computer Science at UCD and Lero, the Irish Software Research Centre. Her interdisciplinary research focused on the dynamic and reciprocal relationship between ubiquitous technologies, personhood, and society. Specifically, she explored how ubiquitous technologies constitute and shape what it means to be a person, through the lenses of embodied cognitive science, complexity science, and critical data studies.

Her work with Vinay Prabhu revealed that large-scale image datasets commonly used to develop AI systems, including ImageNet and 80 Million Tiny Images, contained racist and misogynistic labels and offensive images. She has been recognised by VentureBeat as a top innovator in computer vision.