The ADAPT Centre, in partnership with IDA Ireland, recently held a breakfast briefing, Interpreting the AI Act for Industry, at the IDA offices in Dublin. The briefing took place on Friday 13th October and aimed to decode the EU AI Act and help businesses understand its impact on industry.
ADAPT Interim Director and Head of Artificial Intelligence at Trinity College Dublin, Professor Dave Lewis, provided an introduction to the EU AI Act and highlighted some of the issues ahead that businesses should be aware of. The EU AI Act is part of a new legislative framework for product health and safety harmonisation across a single market; it aims to enable access to the EU single market for AI products and services while protecting health, safety and fundamental rights. The Act would apply a risk-based approach to regulating AI within the EU and require businesses to seek product certification for high-risk AI systems.
As Prof. Lewis outlined, the EU AI Act is currently in trilogue negotiations. If things go smoothly, he expects a final version of the text as early as the end of this year, though the Act is not expected to come into full force for another couple of years. The final version is likely to leave room for interpretation, with clear precedents set only once the law is enforced. A national AI strategy is under development, and the NSAI published its AI Standards and Assurance Roadmap this year. ADAPT researchers are also actively analysing the Act and have developed a tool to help interpret it and make it more open and accessible. Multi-stakeholder engagement in this area is encouraged by EMPOWER, a collective of existing SFI researchers, including ADAPT, whose communities of practice bring industry and academia together to discuss issues of standards.
Dr. Harshvardhan Pandit (Dublin City University) presented on the topic of RegTech: GDPR & AI Act, detailing risk modelling and practical aspects of regulation. He discussed the research challenges that ADAPT is addressing in this landscape of regulatory compliance:
“What we work at in ADAPT is how do we automate this risk assessment and how do we help you discover what risks apply regardless of what you are doing and then correlate those risks with what the regulatory requirements are.”
The risk assessment tool is supported by AI and is still a work in progress. It uses semantic web technologies to analyse the draft text of the AI Act and help users determine whether their use case falls into a high-risk category, with higher-risk categories bringing tighter controls. Dr. Pandit also emphasised the importance of building in regulatory compliance from the very beginning:
“We are seeing increasing standards for how to represent information in a machine readable way… the software snapshot that we have for the AI Act, all of that is built on open standards so you express the law as code, or code as law, so when you are building systems they are compliant by design.”
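The idea of expressing regulatory rules in machine-readable form can be illustrated with a minimal sketch. The category names and matching logic below are hypothetical simplifications invented for illustration; the actual ADAPT tool works on the draft Act's text using semantic web technologies and open standards, not this keyword check.

```python
# Hypothetical, simplified stand-ins for some high-risk areas discussed
# around the draft AI Act. This list is illustrative only and is NOT the
# actual Annex III text or any part of ADAPT's tool.
HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "law enforcement",
}

def classify_use_case(description: str) -> str:
    """Return 'high-risk' if the description mentions a listed area,
    otherwise flag it for further review."""
    text = description.lower()
    for area in HIGH_RISK_AREAS:
        if area in text:
            return "high-risk"
    return "needs further review"

print(classify_use_case("CV screening for employment and worker management"))
# high-risk
print(classify_use_case("weather forecasting dashboard"))
# needs further review
```

Encoding rules this way, however roughly, shows what "compliance by design" means in practice: the same machine-readable rules that classify a use case can be checked automatically while a system is being built, rather than audited after the fact.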
The briefing also featured a panel discussion with experts Peter Bolger, EY Partner in Law, Aditya Mohan, NSAI Standards Officer, and Prof. Lewis, chaired by Jonathan McCrea, broadcaster and host of Newstalk’s Futureproof. The panel advised businesses to start thinking about their compliance requirements now. Peter Bolger recommended putting a policy in place now to govern how staff use AI, while Prof. Lewis recommended taking an ISO 9000 course as part of a strategy for preparing for the AI Act, as knowledge of quality management systems will be imperative in interpreting and applying the Act once it comes into force.
Prof. Lewis highlighted a general concern around the regulation of AI: whether government bodies will have enough staff with the right level of expertise to review submissions once the Act is in full force. He noted that much of the process will rely on self-certification and warned businesses that they should prepare to take responsibility:
“If you are the body providing the service that falls on you. It is in your interest to make sure that you are doing those checks with your suppliers. There may be interaction with the Liability Directive but as it stands under the AI act at the moment, you are taking that responsibility and it won’t be an excuse if you didn’t ask them.”
Aditya Mohan, NSAI Standards Officer, also advised businesses to start a dialogue with their suppliers about how they may be using AI, and to know the origins of the technology they use, e.g. LLMs (large language models), the location of servers, etc. Peter Bolger noted that AI clauses are already being included in contracts. Ultimately, businesses are advised to pay attention to how AI will be defined under the AI Act and ensure they comply with its regulations.
More information on ADAPT’s Research on Open Resources and Tools can be found here.
You can access a recording of this event here.