Augmenting Social Bot Detection with Crowd-Generated Labels
Published in: Information Systems Research, 2023-06, Vol. 34 (2), pp. 487-507
Author:
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Social media platforms are facing increasing numbers of cyber-adversaries seeking to manipulate online discourse by using social bots (i.e., social media software robots) to help automate and scale their attacks. Likewise, some social media users can identify social bot activity with varying degrees of confidence. In this research, human reactions to social bot messages are used to augment existing social bot detection capabilities. Speech act theory inspires a framework for assessing the credibility of instances where users identify potential bot activity, because not all user responses are equally credible for assisting with the bot detection task. The framework is then operationalized through deep learning methodologies to develop a computational system for identifying social bots. The real-world performance and practicality of the developed framework are demonstrated on a live, crowd-sourced data set collected from a real-world social media platform. Results show that considering crowd reactions to suspected bots can significantly improve bot detection performance. Furthermore, considering speech acts when evaluating crowd reactions can augment the system's performance even further, although speech acts themselves are not necessary to observe a performance boost through crowd intelligence. This study serves as a grounding point for future work exploring an augmented model for detecting other forms of algorithmically generated content within social media platforms.
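To make the idea concrete, the sketch below shows one hypothetical way a baseline detector's bot probability could be blended with crowd labels weighted by the credibility of their speech acts. This is not the paper's model: the function names, speech-act categories, and weights are illustrative assumptions only; the paper operationalizes both steps with deep learning rather than fixed weights.

```python
# Minimal sketch (not the authors' implementation): blend a baseline
# bot-detection score with credibility-weighted crowd labels.
# The speech-act categories and weights below are hypothetical.

from dataclasses import dataclass

# Hypothetical credibility weights for the speech act of a reply to a
# suspected bot: a direct accusation counts as stronger evidence than
# mere suspicion.
SPEECH_ACT_CREDIBILITY = {
    "assertive": 0.9,   # direct claim that the account is a bot
    "expressive": 0.5,  # suspicion or doubt voiced about the account
    "directive": 0.3,   # e.g., asking others whether the account is a bot
}

@dataclass
class CrowdReaction:
    speech_act: str   # coarse speech-act category of the reply
    says_bot: bool    # whether the reply labels the target as a bot

def augmented_bot_score(base_score: float,
                        reactions: list[CrowdReaction],
                        crowd_weight: float = 0.4) -> float:
    """Blend a detector's bot probability with credibility-weighted crowd labels."""
    if not reactions:
        return base_score
    weights, votes = 0.0, 0.0
    for r in reactions:
        cred = SPEECH_ACT_CREDIBILITY.get(r.speech_act, 0.1)
        weights += cred
        votes += cred * (1.0 if r.says_bot else 0.0)
    crowd_score = votes / weights  # credibility-weighted fraction of "bot" labels
    return (1.0 - crowd_weight) * base_score + crowd_weight * crowd_score

# Example: the detector alone is unsure (0.55), but two credible
# crowd accusations push the combined score higher.
score = augmented_bot_score(
    0.55,
    [CrowdReaction("assertive", True), CrowdReaction("expressive", True)],
)
print(round(score, 3))  # 0.73
```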
History:
Olivia Sheng, Senior Editor; Jesse Bockstedt, Associate Editor.
Supplemental Material:
The online appendix is available at https://doi.org/10.1287/isre.2022.1136.
ISSN: 1047-7047, 1526-5536
DOI: 10.1287/isre.2022.1136