A quantitative argumentation-based Automated eXplainable Decision System for fake news detection on social media


Detailed description

Bibliographic details
Published in: Knowledge-Based Systems, 2022-04, Vol. 242, p. 108378, Article 108378
Main authors: Chi, Haixiao; Liao, Beishui
Format: Article
Language: English
Online access: Full text
Description
Abstract: Social media is flooded with rumors, which makes fake news detection a pressing problem. Many black-box approaches have been proposed to automatically predict the veracity of claims, but these methods lack interpretability. Thus, we propose a Quantitative Argumentation-based Automated eXplainable Decision-making System (QA-AXDS) to tackle this problem and provide users with explanations of the results. The system is fully data-driven in its processes, which allows our models to make greater use of data and to be more automatic and scalable than other quantitative framework models. In terms of interpretability, the system can automatically acquire human-level knowledge and interact with users in the form of dialog trees through explanatory models, thus helping them understand the internal reasoning process of the system. The experimental results show that our system has better transparency and interpretability than other approaches based on pure machine learning methods, while achieving competitive accuracy. In addition, the explanation model provides a way to improve the algorithms when problems are identified by inspecting the explanations.
ISSN:0950-7051
1872-7409
DOI:10.1016/j.knosys.2022.108378