Personalising the user experience

How personalising the user experience can help prevent the spread of disinformation – a must-read interview with Professor Owen Conlan about the EU project Provenance


Professor Owen Conlan is a researcher and educator at Trinity College Dublin. He guides the technical development of the Provenance Horizon 2020 project and also leads the Digitally Enhanced Engagement strand at the large-scale ADAPT research centre, which focuses on research into digital content technology and how people interact with it. Owen is internationally recognised and well cited for his work in personalisation, which has developed novel techniques in artificial intelligence, knowledge engineering and visual data analytics to deliver new understandings of how users interact with rich information experiences. He has contributed to over 180 publications, mentored postdoctoral fellows and supervised PhD and MSc students.

How is ADAPT expertise concretely brought into the Provenance project’s consortium?

ADAPT is a research centre funded by Science Foundation Ireland that encompasses experts from several Irish universities. ADAPT focuses on research in digital content technology and has a large amount of user-facing research. I am based in Trinity College Dublin and I lead the Digitally Enhanced Engagement strand within ADAPT. ADAPT expertise is brought into Provenance in a number of areas. There is the research expertise in personalisation, an area we have been researching in Trinity College for over 25 years; there is also the design support to ensure that the user interface is created with best practice in mind; and there is innovation support to help ensure that Provenance achieves its maximum impact in society. ADAPT also offers all-important management and coordination support.

How will the digital companion/plug-in be developed? How will it concretely help people analyse and evaluate content? Is the plug-in going to be updated so it can meet the evolving challenges posed in the digital context?

The personalised digital companion and plug-in are designed to offer users in situ support when they interact with news content on the web. They have been designed to act in an anonymous manner, while also offering a personalised experience. All data gathered about the user’s interactions with the content remain on their own computer within the plug-in, and are also abstracted away from discrete actions to broad areas of interest.

This model of the user also becomes more abstract with time, so certain interests blur into higher categories. For example, you may show a lot of interest in vaccines and vaccine science, but over time, if this interest is not maintained, it will blur into an interest in health sciences in general. The personalised digital companion can ask the plug-in questions to help identify how it might offer support to the user, specifically in the form of media literacy skills. It does this in a transactional manner and does not retain any information about a specific user. So, concretely, the personalised digital companion and plug-in offer users information about the online news they are interacting with, along with guidance, tailored to their level of expertise in that material, on how to understand the content they are presented with.

The plug-in adapts to the evolving context through the knowledge gathered about potentially problematic content. This is most evident when the user is interacting with content from a domain about which they have little experience (note: users can update their locally stored models to indicate if they have more experience with a topic than has been observed).
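The locally stored interest model described above — discrete actions abstracted into broad interests that gradually blur into higher-level categories when not maintained, with user-editable entries — could be sketched roughly as follows. This is a hypothetical illustration, not the actual Provenance implementation; the topic hierarchy, decay rate, and threshold values are invented for the example.

```python
# Hypothetical topic hierarchy: specific interests roll up into broader ones.
# (Invented for illustration; not the Provenance ontology.)
PARENT = {
    "vaccines": "health sciences",
    "vaccine science": "health sciences",
    "health sciences": None,  # top-level category
}

class LocalInterestModel:
    """Interest model that lives only on the user's own machine (sketch)."""

    def __init__(self, decay_threshold=0.3):
        self.weights = {}                  # topic -> interest strength in [0, 1]
        self.decay_threshold = decay_threshold

    def observe(self, topic, strength=0.2):
        """Record an interaction, abstracted away from the discrete action."""
        w = self.weights.get(topic, 0.0)
        self.weights[topic] = min(1.0, w + strength)

    def decay(self, rate=0.5):
        """Fade interests over time; blur weak specific interests upward."""
        for topic in list(self.weights):
            self.weights[topic] *= rate
            parent = PARENT.get(topic)
            if parent and self.weights[topic] < self.decay_threshold:
                # Transfer the residual interest up the hierarchy.
                self.weights[parent] = min(
                    1.0, self.weights.get(parent, 0.0) + self.weights.pop(topic)
                )

    def set_expertise(self, topic, strength):
        """Users can inspect and correct the locally stored model directly."""
        self.weights[topic] = strength
```

Because the model holds only coarse topic weights rather than a log of individual actions, the companion can query it transactionally without any user-identifying data ever leaving the machine.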

Your main research field is ‘personalising the user experience’. What are the main approaches and frameworks related to the user-content interaction that are being researched? Do these approaches work for different content/topics?

I am a strong advocate of user control in personalisation. The techniques I have worked on often lean heavily towards user models that can be scrutinised and modified. This has led to the development of a number of approaches and frameworks by my team, such as APeLS, VisEN, NABS and OPEL. These frameworks have been applied in a variety of contexts (e.g. technology enhanced learning and delivery of notifications) but have been created with the basic tenet of user control at their core. By ensuring that user models are semantically meaningful and can be visualised, the ability of users to control their interaction with content is maximised.

Can you tell us a bit more about how personalising the user experience can help prevent the spread of disinformation (in Provenance & beyond)?

Personalisation offers the promise of tailoring how content is delivered to a user. I must point out that we are not altering what a user sees in their interaction with news (i.e. we are not adding or removing items from their news feeds or from websites that they interact with). Rather, we are supplementing these items with information about the articles and their provenance (i.e. where else we have seen similar content), either textually or semantically. This additional information, in itself, is not personalisation. Where personalised support kicks in is in offering media literacy skills development support that is tailored to the current context, to ensure the user is establishing transferable skills in how they examine content. This is important, as users are an exceptionally adaptable and intelligent element of this ‘socio-technical’ system. Leveraging personalisation to upskill the user ensures that they will bring those skills into other contexts where Provenance may not be available, e.g. print media.

How is the impact of ‘personalising the user experience’ measured?

There are a number of ways to measure the impact of this form of research. One is to examine the propensity of users to change their behaviour around interactions with content, to see if the personalisation had an impact on their decisions. Another commonly used method is to perform deep follow-ups with evaluation participants to see how interventions shaped their thinking around the content they interacted with.

You are probably aware of the News Provenance Project (NPP) launched by the research and development department of the New York Times. This project tries to counter misinformation on the Internet by focusing on visual journalism and by implementing a solution to help users to better assess online information. Why do you think this common approach could work? What are the critical elements to make it effective?

Yes, I am familiar with this work. There are some similarities to, but also some differences from, our Horizon 2020 Provenance project. One key difference is NPP’s focus on photographs. This is certainly an element of Provenance, but we also have a strong interest in the textual content and in examining what features of such content make it more or less trustworthy. When combined with our knowledge about the provenance of images, we can form a more holistic perspective of a news article. We also have a strong focus on improving people’s media literacy skills, to ensure they can develop self-reflective skills in understanding their own behaviour and bring what they have learned to other contexts that may not be digital.

Originally published in

A scientist’s opinion: Interview with Professor Owen Conlan about the EU Project Provenance