Sequential targeting: A continual learning approach for data imbalance in text classification
Published in: Expert Systems with Applications, 2021-10, Vol. 179, p. 115067, Article 115067
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: •Handling data imbalance in text data with continual learning. •Detecting sexual harassment and sentiment in comments on social media platforms. •Effective trade-off between precision and recall for an overall increase in F1-score. •Results show a 56.38 %p increase on the IMDB dataset. •Results show 16.89 %p and 34.76 %p increases on the NAVER dataset.
Text classification has numerous use cases, including sentiment analysis, spam detection, document classification, and hate speech detection. In realistic settings, classification on text data confronts imbalanced conditions in which the classes of interest usually make up only a minor fraction of the data. Deep neural networks used for text classification, such as recurrent neural networks and transformer networks, lack efficient methods for addressing imbalanced data. Traditional data-level methods for mitigating distributional skew include oversampling and undersampling: oversampling degrades the quality of the original language representation of the sparse minority-class data, whereas undersampling fails to fully utilize the rich context of the majority classes. We address these issues with a data-driven approach: we partition the training data distribution into mutually exclusive subsets and perform continual learning, treating the individual subsets as distinct tasks. We demonstrate the effectiveness of our method through experiments on the IMDB dataset and on datasets constructed from real-world data. The experimental results show that, in terms of the F1-score metric, the proposed method improves on the baseline by 56.38 %p on the IMDB dataset and by 16.89 %p and 34.76 %p on the constructed datasets.
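The data-handling step the abstract describes — partitioning the imbalanced training distribution into mutually exclusive subsets and presenting them to the model as sequential tasks — can be sketched as follows. This is a minimal illustration only: the function name, the task count, and the per-class splitting scheme are assumptions for exposition, not the paper's exact procedure.

```python
import random

def partition_tasks(majority, minority, n_tasks, seed=0):
    """Split each class into n_tasks mutually exclusive chunks and pair
    them up, so that every sample appears in exactly one task subset."""
    rng = random.Random(seed)
    majority, minority = majority[:], minority[:]
    rng.shuffle(majority)
    rng.shuffle(minority)
    maj_chunks = [majority[i::n_tasks] for i in range(n_tasks)]
    min_chunks = [minority[i::n_tasks] for i in range(n_tasks)]
    return [m + n for m, n in zip(maj_chunks, min_chunks)]

# Hypothetical imbalanced corpus: 900 majority vs. 100 minority samples.
majority = [(f"maj_{i}", 0) for i in range(900)]
minority = [(f"min_{i}", 1) for i in range(100)]
tasks = partition_tasks(majority, minority, n_tasks=3)

# Each subset would then be fed to the classifier in sequence, with a
# continual-learning method limiting forgetting between the "tasks".
for t, task in enumerate(tasks):
    pos = sum(label for _, label in task)
    print(f"task {t}: {len(task)} samples, {pos} minority")
```

Because the subsets are disjoint, no minority-class sample is duplicated (unlike oversampling) and no majority-class sample is discarded (unlike undersampling), while each task sees a far less skewed class ratio than the full corpus.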
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2021.115067