Our research vision is to pioneer new forms of proactive, scalable, and integrated AI-driven Digital Content Technology that empowers individuals and society to engage in digital experiences with control, inclusion, and accountability. Our AI-driven digital content technology research contributes to a world where all digital engagements (individual, algorithmic, enterprise, and societal) are rich, language-driven, immersive, and personalised. While ADAPT research into text, speech, and video processing, and into AR and VR, especially in collaboration with industry, will contribute to realising these digital experiences, our pursuit of a Balanced Digital Society focuses our cross-cutting research in machine learning, personalisation, and data management.
In the ADAPT Centre, we have strengthened our core areas of research expertise in Content Analytics, Machine Translation, Personalisation, Multimodal Interaction, Human-Computer Interaction and Data Management.
At ADAPT, our interdisciplinary research team incorporates leading experts from the complementary fields of Social Sciences (e.g., psychology, translation and communication studies, and sociology), Communications, Journalism, Business, Finance, Nursing, Medicine, Neuroscience, Gerontology, and Health Informatics to drive better health and wellbeing outcomes.
The ADAPT research programme expands our ambition to deliver impact on societal challenges and explores how engagement with digital content technology affects key aspects of our lives, including our health, wellbeing, finances, consumption, productivity, and personal autonomy as consumers and citizens online.
Governments and civil society are starting to recognise the need for urgent and concerted action to address the societal impact of the accelerating pace of digital content technologies and the AI techniques that underpin them. To align with this vision and pioneer new forms of proactive, scalable, and integrated AI-driven Digital Content Technology, the ADAPT II Research Programme is organised into three Research Strands, each encompassing a portfolio of research challenges and projects funded by SFI, industry collaborative research, non-exchequer non-commercial (NENC), philanthropic, and commercialisation sources. The three Research Strands are:
From the enterprise and societal perspective, new multi-stakeholder practices and structured knowledge and integration techniques will enable organisations to balance the value and risk of integrating data to offer rich digital experiences. This will inform societies by helping them develop the policies and institutions needed to hold organisations accountable for their choices in this regard. It will deliver data governance models and multi-stakeholder governance practices that reveal and manage the value organisations seek and the values societies expect from AI-driven digital engagement.
From the algorithmic perspective, new machine learning techniques will enable more users to engage meaningfully, and in a more measurably effective manner, with the increasing volumes of content globally, while ensuring the widest linguistic and cultural inclusion. This Strand will deliver the effective, robust, and integrated machine learning algorithms needed to provide multimodal content experiences with new levels of accuracy, multilingualism, and explainability.
From the individual perspective, research within this Strand will deliver proactive agency techniques that sense, understand, and proactively serve the needs of individual users, delivering relevant, contextualised, and immersive multimodal experiences while offering those users meaningful control over the machine agency behind them.
The pursuit of a single research vision across the three Research Strands is supported by a programme of Cross-Strand research, in which two complementary research challenges address this balance across all Strands.
To train the next generation of research experts, the SFI Centre for Research Training in Digitally-Enhanced Reality (d-real) provides an innovative, industry-partnered research training programme that equips PhD students with deep ICT knowledge and skills across Digital Platform Technology and Content and Media Technology, and their application in industry sectors. d-real postgraduate students will make research breakthroughs in areas such as multimodal interaction, multimodal digital assistants, multilingual speech processing, real-time multilingual translation and interaction, machine intelligence for video analytics, and multimodal personalisation and agency.
Whether via multimodal devices such as smartphones, embedded displays, and IoT, or via virtual assistants and VR/AR experiences, media technology is revolutionising the way we interact, collaborate, and behave. d-real PhD students will develop skills for next-generation human-centric media technology (https://d-real.ie/).
Competitively won R&D funding from the EU enables ADAPT to pursue international collaborations to realise cutting-edge scientific research and develop innovative technologies that tackle societal challenges.