Critical research to identify vulnerabilities in Knowledge Graph Embedding Models accepted at ACL 2021

16 July 2021

Peru Bhardwaj, John Kelleher, Luca Costabello, and Declan O’Sullivan have had their paper, titled ‘Poisoning Knowledge Graph Embeddings via Relation Inference Patterns’, accepted for publication at ACL 2021. The conference is a leading venue for research on Natural Language Processing and will run virtually from 1 to 6 August 2021.

Knowledge Graph Embedding (KGE) models are increasingly used in high-stakes domains such as healthcare and finance, and critical research is underway to identify the security vulnerabilities of these models. Data poisoning attacks are methods for identifying vulnerabilities in learning algorithms that an adversary could exploit to manipulate the learned model’s behaviour; such manipulation can lead to unintended model behaviour and failure. In their ACL paper, the ADAPT researchers propose a set of data poisoning attacks against KGE models. By proposing these attacks, the researchers provide an opportunity to fix the security vulnerabilities of KGE models and protect stakeholders from harm. In this way, the research is directed towards minimising the negative consequences of deploying KGE models in commercial applications.


This research forms part of Peru Bhardwaj’s PhD at Trinity College Dublin and is co-funded by Accenture Labs and the ADAPT Centre.