The AIAL is an independent research lab with a mission to ensure that AI technologies work for the public, particularly those at the margins of society, who tend to be disproportionately negatively impacted. We believe rigorous research and empirical evidence are the antidote to most of the current issues plaguing the AI industry, by holding responsible bodies accountable for the adverse consequences, and ushering meaningful transformative change.
There exists widespread enthusiasm from big tech companies, AI vendors, and government bodies alike to integrate AI technologies into decision-making processes. This emerges from the belief that doing so accelerates positive social transformation, societal benefits, and human flourishing in the long term. Yet AI systems are integrated hastily into numerous social sectors, most of the time without rigorous vetting. As a result, AI systems built on social, cultural, and historical data, and operating within such a realm, tend to diminish fundamental rights and keep systems of authority and power intact. They benefit a handful of corporations and exacerbate and widen inequity, rather than contribute to societal benefits, positive transformation, and human flourishing.
Research and product development in AI currently benefit a few powerful actors, reinforce systems of power, and exacerbate and widen inequities. The AIAL is dedicated to ensuring that the wider AI ecology — from research and product development to regulation — centres public interest, particularly the most marginalised and disenfranchised in society.
Research excellence and technical rigour are paramount to us, as is research with practical implications that serves those who are disproportionately negatively impacted. Thus, we partner and collaborate with research centres, civil society, and rights groups across the globe. These collaborations, active conversations, and allyship give our work the weight and momentum needed to advance the laboratory's central mission of asserting rights. Driven by concerns that affect the most marginalised, we strive to uncover, document, and study AI technologies that pervade society in order to:
● challenge and dismantle harmful technologies;
● inform evidence-driven policies;
● hold responsible bodies accountable; and
● pave the way for a future marked by just and equitable AI.
Research project/Challenge

The AIAL (AI Accountability Lab) is seeking to appoint a Postdoctoral Researcher to develop a justice-oriented audit framework that synthesises computational methods, theories of justice, and existing regulations to deliberately orient audits towards meaningful accountability. The goal of the framework is to provide audit practitioners with practical tools, such as guiding questions and rubrics, that shape perspectives towards rigorous, justice-oriented audits.
The position corresponds to work in one or more of the following areas:

1. Accountability

a. Mapping accountability mechanisms and governance structures, and their alignment with fundamental rights and freedoms and legal frameworks.
b. Challenging existing accountability mechanisms that do not consider or sufficiently address social inequalities and power and resource asymmetries.
c. Developing new methods for ensuring accountability, beyond technical and organisational considerations, that provide empirical evidence for holding stakeholders accountable for AI development, provision, and deployment.
2. Auditing
a. Developing audit methodologies for specific stages in the AI lifecycle, focused on ensuring justice, accountability, and transparency beyond merely satisfying legal requirements.
b. Developing verifiable, replicable, and reproducible design methodologies and frameworks, and using these in the execution of audits.
c. Developing audit tools and frameworks to evaluate AI development and deployments, with a specific focus on risk and harm mitigation beyond technical and organisational issues.
3. Dissemination
a. Actively work with numerous audit practitioners, researchers, and civil society and rights groups to develop, refine, and disseminate the developed work.
b. Participate in policy-making to shape AI accountability and auditing processes.
c. Publish findings and participate in discussions to promote accountability and responsibilities concerning AI.
The appointment of the Postdoctoral Researcher will contribute to shaping best practices for AI systems evaluation and risk mitigation, ensuring compliance with emerging regulations, and supporting public sector stakeholders in fostering a culture of accountability.
Duties and Responsibilities
● Manage and conduct the research under the leadership of the Principal Investigator.
● Disseminate outcomes of the research through project reports as well as peer-reviewed academic publications, technical reports, industry and public events, and other channels.
● Assist in the further development of research potential by pursuing external funding.
● Where relevant to the research topic, and within the bounds of TCD policies, assist in supervising students, interns, and other personnel.
● Organise meetings and workshops with civil societies, NGOs, and other stakeholders in the context of the project.
● Carry out administrative and management tasks associated with the project.
● Carry out any other duties within the scope and purpose of the job as requested by the PI.
● Comply with all TCD policies and regulations, including those in relation to Research Ethics and Health and Safety.
Qualifications, Skills and Experience Required

The ideal candidate will demonstrate the appropriate mix of knowledge, experience, skills, talent and abilities as outlined below:
Knowledge and Experience

● A PhD in a relevant discipline such as Machine Learning; AI; Statistics; Critical data, information, communication, and/or media studies; or equivalent (essential)
● Strong knowledge and experience regarding AI development, testing, and evaluation (essential)
● Strong knowledge and experience with developing and implementing AI audits (essential)
● Strong knowledge of the AI auditing landscape (essential)
● In-depth knowledge of theories of structural injustice, inequity, and power asymmetry (essential)
● Knowledge or awareness of legal frameworks such as the AI Act and DSA, and of privacy and data protection regulations such as the GDPR (ideal)
● Experience working with civil society and rights groups involved in auditing and policy-making (ideal)
● Evidence of a research profile and publication record (essential)
● Knowledge of research techniques and methodologies (essential)
Skills, talents & abilities

● Capability for demonstrable high-quality research (essential)
● Experience working on collaborative projects (ideal)
● Experience organising stakeholder discussions and workshops (ideal)
● Experience working on individual and collaborative funding proposals (ideal)
● Experience working in teams with diverse and multi-disciplinary cohorts (ideal)
● Demonstrable leadership and willingness to support others (ideal)
Salary Scale: Postdoctoral Researcher – Point PD1.1 €45,847 to Point PD2.4 €58,479 as per the IUA payscales. Appointment will be commensurate with qualifications and experience and in line with current Government pay policy.
Closing date: 30/04/2025
Application Process

Interested candidates can submit their application by emailing [email protected] with the subject line: AIAL Audit Framework Postdoc – Application
Applications must include:
● Cover Letter
● CV
Informal enquiries about the role can also be sent to [email protected] with the subject line “AIAL Audit Framework Postdoc – Enquiry”