ADAPT’s researchers are leading the way in open dialogue surrounding artificial intelligence in an international context.

Artificial intelligence research and technology are advancing at an ever-increasing pace. As researchers across the globe contribute to the understanding of what makes AI intelligent, it becomes ever more important to examine the standards and language surrounding these advancements. If definitions of ‘artificial intelligence’ vary, how can productive conversations be held?

Take the following question: ‘How can we trust self-driving cars to have the knowledge necessary to make safe decisions?’ Can ‘knowledge’ be easily defined? What level of ‘trust’ is acceptable? Safe for whom?

To this end, ADAPT researchers have been working not only within Irish borders but also in an international context. ADAPT’s Prof. Dave Lewis has been central in establishing terminology that will facilitate international collaboration. These foundational AI terms and concepts include ‘artificial intelligence’ and ‘knowledge’, as well as several definitions and concepts related to Natural Language Processing (NLP).

Colleague and ADAPT Spokes researcher Dr. David Filip is the convener of Subcommittee 42 (SC 42) of the joint technical committee (JTC 1) of the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC). This group of over 380 experts from 31 national bodies works towards developing trustworthy AI, in recognition that these advances are part of a global conversation.

Dr. Filip’s position as Ireland’s Head of Delegation on behalf of the National Standards Authority of Ireland (NSAI), and as Chair of the Irish committee for SC 42, ensures that he can co-ordinate across the most ground-breaking of advancements. Combined with ADAPT’s extensive academic and industry partnerships and collaborations, these researchers are merging relevant theoretical concerns with ever-developing, real-world technologies at a global level.

“We need to make sure that AI systems are trustworthy, technically robust, controllable, and verifiable over their entire lifecycle, wherever they are being deployed in the world. Many aspects, including societal concerns such as data quality, privacy, potentially unfair bias, and safety, must be addressed,” said Dr. Filip (further details can be found at https://etech.iec.ch/issue/2020-03/achieving-trustworthy-ai-with-standards).

Prof. Lewis and Dr. Filip recently completed work on a technical report (ISO/IEC TR 24028:2020) for the ISO and IEC joint technical committee (ISO/IEC JTC 1/SC 42), which enumerates the key topics for building the trustworthiness of AI systems. The primary goal of this report is to identify and address the gaps in current AI standards. SC 42 has outlined a number of goals for examining the societal and ethical concerns of AI in the coming years, in relation to the security and quality standards already in place. The technical report also addresses the predictability of AI systems, and how they should be designed to allow trust in the technology. By recognising that these technologies can and will transcend borders, ADAPT’s researchers are leading the way in open dialogue surrounding artificial intelligence in an international context.