Discovering child sexual abuse material creators' behaviors and preferences on the dark web


Full Description

Bibliographic Details
Published in: Child abuse & neglect, 2024-01, Vol. 147, p. 106558, Article 106558
Authors: Ngo, Vuong M; Gajula, Rahul; Thorpe, Christina; Mckeever, Susan
Format: Article
Language: English
Online access: Full text
Description
Abstract: Producing, distributing or discussing child sexual abuse materials (CSAM) is often committed through the dark web to stay hidden from search engines and to evade detection by law enforcement agencies. Additionally, on the dark web, CSAM creators employ various techniques to avoid detection and conceal their activities. The large volume of CSAM on the dark web presents a global social problem and poses a significant challenge for helplines, hotlines and law enforcement agencies. Identifying CSAM discussions on the dark web and uncovering the associated metadata provides insights into the characteristics, behaviors and motivations of CSAM creators. We have conducted an analysis of more than 353,000 posts generated by 35,400 distinct users and written in 118 different languages across eight dark web forums in 2022. Of these, approximately 221,000 posts were written in English and contributed by around 29,500 unique users. We propose a CSAM detection intelligence system. The system uses a manually labeled dataset to train, evaluate and select an efficient CSAM classification model. Once we identify CSAM creators and victims through CSAM posts on the dark web, we proceed to analyse, visualize and uncover information concerning the behaviors of CSAM creators and victims. The CSAM classifier based on a Support Vector Machine model exhibited good performance, achieving the highest precision of 92.3% and accuracy of 87.6%. The Naive Bayes combination performed best in terms of recall, achieving 89%. Across the eight forums in 2022, our Support Vector Machine model detected around 63,000 English CSAM posts and identified nearly 10,500 English CSAM creators. The analysis of the metadata of CSAM posts revealed meaningful information about CSAM creators, their victims and the social media platforms they used.
This included: (1) The topics of interest and the preferred social media platforms of the 20 most active CSAM creators (for example, two top creators were interested in topics such as video, webcam and general forum content, and frequently used platforms such as Omegle and Skype); (2) Information about the ages and nationalities of the victims typically mentioned by CSAM creators, such as victims aged 12 and 13 with nationalities including British and Russian; (3) The social media platforms preferred by CSAM creators for sharing or uploading CSAM, including Omegle, YouTube, Skype, Instagram and Telegram. Our CSAM detection system exhibits high performance in precision, recall and accuracy.
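The abstract reports precision, recall and accuracy figures for the Support Vector Machine and Naive Bayes classifiers. As a minimal illustration of how those evaluation metrics are derived (not taken from the paper; the labels and counts below are entirely synthetic), they can be computed from the confusion-matrix counts of a binary post classifier:

```python
# Illustration only: how precision, recall and accuracy relate to
# confusion-matrix counts for a binary classifier (1 = flagged post,
# 0 = non-flagged post). All data here is synthetic.

def classification_metrics(y_true, y_pred):
    """Compute precision, recall and accuracy for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of flagged, how many correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of positives, how many found
    accuracy = (tp + tn) / len(y_true)                # overall fraction correct
    return precision, recall, accuracy

# Toy example: 10 posts, hypothetical gold labels vs. model predictions.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
p, r, a = classification_metrics(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} accuracy={a:.2f}")
# → precision=0.75 recall=0.75 accuracy=0.80
```

The trade-off shown in the abstract (SVM leading on precision, Naive Bayes on recall) follows directly from these definitions: a stricter classifier flags fewer posts (fewer false positives, higher precision) at the cost of missing more true positives (lower recall).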
ISSN: 0145-2134, 1873-7757
DOI: 10.1016/j.chiabu.2023.106558