New episode of ADAPT Radio’s ‘HumanAIse’ focuses on disinformation and the harmful uses of AI

ADAPT Radio’s HumanAIse series continues this month as we hear how experts are working hard to keep up with AI, and how education and awareness are vital to protecting people from harmful information online.

These days it’s hard to know what to believe. Misinformation and disinformation are nothing new, but AI and social media seem to be driving false information to new levels. Discussing this topic in this month’s episode are Professor Jane Suiter, Associate Professor at Dublin City University, and Dr Brendan Spillane, Assistant Professor in the School of Information and Communication Studies at University College Dublin.

Prof. Suiter’s research focuses on the public sphere, and in particular on scaling up deliberation and on disinformation. She is also leading a new project on countering COVID-19 disinformation and the potential role of deliberation. Dr. Spillane’s PhD investigated the impact of the visual presentation of news on the perception of bias in news articles. He is also PI of VIGILANT, a new Horizon Europe Innovation Action project: a three-year, €4m effort with 18 partners that will equip European police authorities with advanced technologies from academia to detect and analyse disinformation campaigns that lead to criminal activity.

Dr. Brendan Spillane kicks off the podcast by explaining that the difference between misinformation and disinformation is whether or not there is intent behind it: misinformation is simply false information, while disinformation is false information spread with an agenda to change behaviour. Dr. Spillane also discusses “mal-information”, a third category covering information that is genuine but unsuitable for the public domain or private to an individual. For example, the home address or medical records of a politician’s spouse or sibling, spread with malicious intent to embarrass an individual or group in the hope of influencing behaviour in society.

Prof. Jane Suiter outlines how artificial intelligence has contributed to the rise of information that can be harmful to people. One example is the concerning growth in sophisticated fake videos, images and audio produced by AI tools that feed disinformation. Dr. Spillane adds that the sheer volume of such content can also lead people to believe information that may not be true: viewing a claim again and again can reinforce it in an individual’s mind and make it feel like established truth.

Dr. Spillane, as PI of VIGILANT, provides an overview of the project and how it relates to managing the spread of disinformation. Essentially, the project takes existing technologies from academia, combines them in a single platform and makes that platform available to police authorities. He also highlights that the architecture is being built so that new capabilities can be plugged in as technology evolves; if a better image-detection tool becomes available, for example, it can be added to the system with ease.

Prof. Suiter also outlines her own projects: Provenance, a multimillion-euro interdisciplinary project to combat disinformation, and JOLT, a Marie Curie ITN on harnessing digital technologies in communication, on which she is PI. Provenance focuses on detecting different types of media manipulation and warning individuals when an image has been manipulated. The team is also working on countermeasures individuals can take themselves, such as equipping people with the knowledge and tools to resist disinformation and to source reliable information.

Both experts highlight the importance of increasing our own awareness of disinformation to protect ourselves from harm. According to Prof. Suiter, the best way to generate misperceptions is to trigger emotions; if we find ourselves emotionally triggered by information online, we should stop and review the source before sharing it. Dr. Spillane also raises the question of what the next stage of disinformation will be and how society will respond to it. He makes the point that no one is completely immune from spreading disinformation, and that we should always be alert when viewing information online and consider who stands to benefit from its spread.

For further insight into the interactions of humans and AI, catch HumanAIse on SoundCloud, iTunes, Spotify, and Google Podcasts.

ADAPT Radio: HumanAIse is ADAPT’s newest podcast series providing an in-depth look at the future of AI, automation and the implications of entrusting machines with our most sensitive information and decisions.