Extracting LSA topics as features for text classifiers across different knowledge domains

Bibliographic details
Published in: Quality & Quantity, 2020-02, Vol. 54 (1), pp. 249-261
Authors: Evangelopoulos, Nicholas; Amirkiaee, S. Yasaman
Format: Article
Language: English
Online access: Full text
Description
Abstract: The incorporation of unstructured text data in predictive models typically involves a pre-processing step, where features related to topical and opinion content are generated. Such features are extracted from collections of documents that can span more than one knowledge domain. Focusing on latent semantic analysis as the topic extraction method, in this paper we present some methodological aspects of this feature extraction process, using a study of published research in information systems and operations management as an illustration. Our results indicate that classifiers that use unified composite topics, which are extracted from document collections spanning multiple domains and may seem less intuitive to human domain experts, tend to outperform classifiers that use topics extracted separately from isolated domains. In addition, in order to avoid overfitting, a surprisingly low number of topics may be preferable.
ISSN: 0033-5177, 1573-7845
DOI: 10.1007/s11135-019-00954-x
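
The abstract describes a general workflow: pool documents from multiple knowledge domains, extract a small number of LSA topics, and use the topic scores as features for a text classifier. The listing below is a minimal sketch of that workflow, not the authors' code; the scikit-learn components (TfidfVectorizer, TruncatedSVD, LogisticRegression), the toy two-domain corpus, and all parameter values are illustrative assumptions.

# A minimal, illustrative sketch (not the authors' code): extract LSA topics
# from a pooled, multi-domain document collection and use the topic scores as
# classifier features. Library choice (scikit-learn), the toy corpus, and all
# parameter values are assumptions made for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical documents from two knowledge domains (information systems and
# operations management), pooled into one collection so that the extracted
# topics are "unified composite" topics spanning both domains.
docs = [
    "information systems adoption and user acceptance of technology",
    "enterprise systems implementation and organizational change",
    "supply chain coordination and inventory management policies",
    "lean production scheduling and capacity planning in operations",
]
labels = ["IS", "IS", "OM", "OM"]  # hypothetical target classes

# Keep the number of topics deliberately small, echoing the finding that a
# surprisingly low number of topics can help avoid overfitting.
n_topics = 2

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),               # term weighting
    ("lsa", TruncatedSVD(n_components=n_topics, random_state=0)),   # LSA topic scores
    ("clf", LogisticRegression(max_iter=1000)),                     # classifier on topics
])

pipeline.fit(docs, labels)
print(pipeline.predict(["inventory control in a production supply chain"]))

Because the topic model and the classifier sit in one pipeline, the same LSA projection fitted on the pooled collection is applied to new documents at prediction time, which is the setup the paper compares against topics extracted separately per domain.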