New Research Can Help Mitigate Pre-Adolescent Cyberbullying on Online Social Networks

27 September 2022

With the vast majority of us spending hours upon hours online, cyberbullying among young people has become an increasing concern. New research emerging from a cross-disciplinary collaboration between the SFI ADAPT Centre and the DCU Anti-Bullying Centre claims to be the first of its kind aimed at making Artificially Intelligent (AI) moderation tools more efficient at detecting and removing social media posts that inflict cyberbullying on pre-adolescents. The research, conducted in collaboration with experts from the DCU Anti-Bullying Centre and TransPerfect, was published by Cambridge University Press and authored by Prof Brian Davis, Prof Maja Popović, Dr Tijana Milosevic and Kanishk Verma.

For an AI-moderation tool to efficiently and effectively detect whether a social media post or comment qualifies as cyberbullying, it needs to be trained on large amounts of fine-grained annotated data, which is costly and ethically challenging to produce. Moreover, even where fine-grained datasets do exist, they may be unavailable in the target language. As manual translation is slow and expensive, this new research offers a workaround: the study proposes leveraging state-of-the-art machine translation (MT) to automatically translate a pre-adolescent cyberbullying dataset, which can then be used to train the AI moderator and thereby increase its efficiency.
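The study's own pipeline is not reproduced here, but a minimal sketch of this translation step, assuming an off-the-shelf Italian-to-English MT model from Hugging Face (Helsinki-NLP/opus-mt-it-en) and invented example comments rather than the actual dataset, might look like this:

```python
# Sketch: machine-translate an annotated Italian dataset into English so an
# English-language classifier can be trained on it. The MT model below is an
# illustrative assumption, not necessarily the system used in the study.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")

# Hypothetical annotated records: (Italian comment, cyberbullying label)
italian_dataset = [
    ("Sei un perdente, nessuno ti vuole qui.", 1),
    ("Bella foto, complimenti!", 0),
]

# Translate each comment, carrying the original label across unchanged
english_dataset = [
    (translator(text)[0]["translation_text"], label)
    for text, label in italian_dataset
]

for text, label in english_dataset:
    print(label, text)
```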

A number of factors must be taken into account by an AI moderator to accurately identify cyberbullying posts, including the identification of cyberbullying roles and forms. This study presents a first-of-its-kind experiment in leveraging MT to translate a unique pre-adolescent cyberbullying gold-standard dataset in Italian, with fine-grained annotations, into English for training and testing a native binary classifier for pre-adolescent cyberbullying. In addition to contributing a high-quality English reference translation of the source gold standard, the experiments indicate that the performance of the target (English) binary classifier, when trained on machine-translated output, is on par with that of the source (Italian) classifier.
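The article does not detail the classifier architecture, so the following sketch stands in a simple TF-IDF plus logistic regression baseline on toy placeholder data; in the study, the training texts would be the machine-translated English version of the Italian gold standard:

```python
# Sketch: train and evaluate a binary cyberbullying classifier on the
# machine-translated English data. TF-IDF + logistic regression is a stand-in
# baseline; the study's actual classifier may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy placeholder data, repeated only so the split below has enough samples
texts = [
    "You are a loser, nobody wants you here.",
    "Nice photo, congratulations!",
    "Everyone hates you, just leave.",
    "Great job on the match today!",
] * 25
labels = [1, 0, 1, 0] * 25  # 1 = cyberbullying, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(X_train, y_train)

# F1 on held-out data is the kind of metric used to compare the target
# (English) classifier against the source (Italian) one
print("F1 on held-out data:", f1_score(y_test, clf.predict(X_test)))
```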

Due to the COVID-19 pandemic and the move towards online education in schools, online social networks (OSNs) have become an integral channel for day-to-day communication in the lives of many pre-adolescent children. Recent transparency reports shared by OSN companies such as Meta have revealed a surge in cyberbullying-related incidents. Although such reports reveal that AI-moderation tools have enabled companies to tackle 44% of teen and pre-teen cyberbullying, many cases still require human moderation. This new research can make AI-moderation tools more efficient, moving towards solutions that require less human intervention in the future.

In principle, cyberbullying comprises antisocial online behaviour including flaming, harassment, denigration, masquerading, cyberstalking and so on. However, new forms of cyberbullying are being reported every day, and pre-adolescent children are not equipped to deal with such negative experiences as they grow up. Even as OSNs and researchers work to make AI-moderation tools more effective at mitigating this, acquiring datasets to train such models comes with its own set of challenges, including but not limited to consent and assent, privacy, vulnerability and confidentiality. In this context, research such as this can be ground-breaking, making a real difference both to technological advancement in this area and to the lives of children affected by such harmful online experiences every day.