An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets
Published in: Applied Sciences 2021-11, Vol. 11 (22), p. 10801
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Xenophobia is a social and political behavior that has been present in our societies since the beginning of humanity. Feelings of hatred, fear, or resentment arise toward people from communities different from our own. With the rise of social networks such as Twitter, hate speech has spread swiftly because of the pseudo-anonymity these platforms provide. Sometimes this violent behavior on social networks, which begins as threats or insults against third parties, breaks through the Internet barrier and becomes an act of real physical violence. Hence, this proposal aims to correctly classify xenophobic posts on social networks, specifically on Twitter. We collected a database of xenophobic tweets, from which we also extracted new features using a Natural Language Processing (NLP) approach. We then provide an Explainable Artificial Intelligence (XAI) model that allows us to better understand why a post is considered xenophobic. Consequently, we provide a set of contrast patterns describing xenophobic tweets, which could help decision-makers prevent acts of violence triggered by xenophobic posts on Twitter. Finally, our new feature representation, combined with a contrast pattern-based classifier, obtains classification results similar to those of other feature representations combined with prominent machine learning classifiers, whose outputs are not easy for an expert in the application area to understand.
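The abstract names the pipeline components only at a high level. The sketch below is a minimal illustration of what such a pipeline could look like, not the authors' actual method: the toy tweets and labels are hypothetical, TF-IDF stands in for the paper's custom NLP feature representation, and a shallow decision tree stands in for the contrast pattern-based classifier (scikit-learn ships no contrast pattern miner), since its if/then splits are similarly readable by a domain expert.

```python
# Illustrative sketch only: TF-IDF features from tweet text plus an
# interpretable rule-based stand-in classifier. Both the feature
# representation and the classifier are assumptions, not the paper's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical toy data; the actual study uses a collected tweet database.
tweets = [
    "go back to your country",            # xenophobic
    "immigrants are ruining everything",  # xenophobic
    "welcome to our community",           # non-xenophobic
    "great food at the festival today",   # non-xenophobic
]
labels = [1, 1, 0, 0]  # 1 = xenophobic, 0 = non-xenophobic

# Unigram/bigram TF-IDF as a simple stand-in for the paper's features.
vectorizer = TfidfVectorizer(lowercase=True, ngram_range=(1, 2))
X = vectorizer.fit_transform(tweets)

# A shallow tree yields human-readable rules, loosely analogous to the
# contrast patterns the paper extracts to explain why a tweet is flagged.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, labels)

# Print the learned rules in plain text for inspection by a domain expert.
print(export_text(clf, feature_names=vectorizer.get_feature_names_out().tolist()))
```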
ISSN: 2076-3417
DOI: 10.3390/app112210801