Interpreting the EU AI Act for Industry

Helping organisations navigate the various regulations in existence to empower responsible and impactful AI innovation.

Industry Challenge
Measures like the proposed EU AI Act are a significant factor that businesses need to understand and navigate. This landmark piece of legislation seeks to regulate AI applications and ensure their ethical and trustworthy use. While these regulations might appear challenging for organisations, potentially limiting the use of certain data sets and introducing measures such as watermarking for AI-generated content, they are not insurmountable obstacles. Coupled with strict data privacy norms akin to the GDPR, these regulations require businesses to adapt and innovate rather than obstruct progress. Organisations leading the way will need both a voice in shaping, and a deep understanding of, regulatory guardrails.

The ADAPT Solution

Unlock Regulatory Compliance with ADAPT’s Research Tool to Assess Risk
At ADAPT, we recognise the intricacies of data regulation. To help organisations navigate the various regulations in existence, ADAPT researchers have developed a groundbreaking tool that automates risk assessments across regulations, making it easier to develop AI with confidence in compliance. The risk assessment tool allows organisations to navigate the complexities of data regulation, discover the risks that exist, and then correlate those risks with regulatory requirements.

We expect the EU AI Act will have major implications for AI and industry and we are looking forward to exploring this developing area with our industry partners over the coming years.

The tool characterises an AI system along five dimensions:

Domain: the domain, sector, or area within which the AI system is or will be deployed; e.g. Health, Education
Purpose: the purpose or end goal which the AI system is or will be used to achieve; e.g. Patient Diagnosis, Exam Assessment
Capability: the capability or application which the AI system provides or will provide; e.g. Facial Recognition, Sentiment Analysis
User: the user or operator who is or will be using the AI system; e.g. Doctor, Teacher
Subject: the subject, individual, or group towards which the AI system is or will be used; e.g. Patients, Students
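The five dimensions above can be captured as a simple record. The sketch below is purely illustrative: the `AISystemProfile` class and its field names are our own, not part of ADAPT's tool.

```python
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    """Illustrative profile of an AI system along the five dimensions above."""
    domain: str      # e.g. "Health", "Education"
    purpose: str     # e.g. "Patient Diagnosis", "Exam Assessment"
    capability: str  # e.g. "Facial Recognition", "Sentiment Analysis"
    user: str        # e.g. "Doctor", "Teacher"
    subject: str     # e.g. "Patients", "Students"


# Example: an AI system used in a clinical setting
profile = AISystemProfile(
    domain="Health",
    purpose="Patient Diagnosis",
    capability="Facial Recognition",
    user="Doctor",
    subject="Patients",
)
```

In practice each dimension would draw on a controlled vocabulary rather than free text, so that risks can be correlated with regulatory requirements consistently.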

Try ADAPT’s Risk Assessment Tool (please note this tool is in continuous development)

What constitutes high-risk AI?
To classify the risk level of an AI system, the intended purpose of the system needs to be examined. There are two criteria:

AI systems are considered high-risk if they are intended to be used as safety components of products covered by the CE-marking legislation listed in Annex II of the AI Regulation.

Stand-alone AI systems that are listed in Annex III to the AI Regulation are also high-risk AI systems. This is the case, for example, for AI in the areas of biometric identification, education and training, credit and emergency services, and the management and operation of critical infrastructure. In the Commission's view, these systems are particularly likely to interfere with health, safety and fundamental rights.

It should be noted that the provider bears the responsibility of correct classification.
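The two criteria above can be sketched as a simple decision helper. This is our own illustration of the logic described in the text, not ADAPT's tool and not legal advice; the annex sets are placeholders standing in for the actual Annex II and Annex III contents.

```python
from typing import Optional

# Placeholder sets, NOT the real annex contents (assumption for illustration).
ANNEX_II_PRODUCT_AREAS = {"machinery", "medical devices", "toys"}
ANNEX_III_STANDALONE_AREAS = {
    "biometric identification",
    "education and training",
    "credit and emergency services",
    "critical infrastructure management",
}


def is_high_risk(safety_component_product: Optional[str] = None,
                 standalone_area: Optional[str] = None) -> bool:
    """Return True if either high-risk criterion from the text applies."""
    # Criterion 1: safety component of a product covered by Annex II.
    if safety_component_product in ANNEX_II_PRODUCT_AREAS:
        return True
    # Criterion 2: stand-alone system in an area listed in Annex III.
    if standalone_area in ANNEX_III_STANDALONE_AREAS:
        return True
    return False


# A biometric identification system is high-risk under criterion 2.
print(is_high_risk(standalone_area="biometric identification"))  # True
```

A real classification would of course involve far more nuance than set membership, which is why the provider's responsibility for correct classification matters.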

Research into the impact of the EU AI Act is continuing. Further information on ADAPT’s research on the EU AI Act can be found here.

Download booklet: Interpreting the EU AI Act for Industry



Delaram Golpayegani
PhD student, School of Computer Science and Statistics, Trinity College Dublin, ADAPT centre
[email protected]

Harshvardhan Pandit
Assistant Professor, School of Computing at Dublin City University, ADAPT Centre
[email protected]

Dave Lewis
Interim Director, ADAPT Centre
Associate Professor, School of Computer Science and Statistics
[email protected]