Cross-Category Defect Discovery from Online Reviews: Supplementing Sentiment with Category-Specific Semantics
Published in: Information Systems Frontiers, 2022-08, Vol. 24 (4), p. 1265-1285
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract: Online reviews contain many vital insights for quality management, but the volume of content makes identifying defect-related discussion difficult. This paper critically assesses multiple approaches for detecting defect-related discussion, ranging from out-of-the-box sentiment analyses to supervised and unsupervised machine-learned defect terms. We examine reviews from 25 product and service categories to assess each method's performance. We examine each approach across the broad cross-section of categories as well as when tailored to a singular category of study. Surprisingly, we found that negative sentiment was often a poor predictor of defect-related discussion. Terms generated with unsupervised topic modeling tended to correspond to generic product discussions rather than defect-related discussion. Supervised learning techniques outperformed the other text analytic techniques in our cross-category analysis, and they were especially effective when confined to a single category of study. Our work suggests a need for category-specific text analyses to take full advantage of consumer-driven quality intelligence.
ISSN: 1387-3326, 1572-9419
DOI: 10.1007/s10796-021-10122-y
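
The abstract contrasts out-of-the-box sentiment analysis with supervised, category-specific classifiers for flagging defect-related discussion. The sketch below is a hypothetical illustration of that contrast, not the authors' pipeline: it pairs a crude negative-word heuristic with a TF-IDF plus logistic regression classifier built with scikit-learn. The review texts, labels, and word list are invented for demonstration only.

```python
# Illustrative sketch (not the paper's implementation): contrasts a simple
# negative-sentiment heuristic with a supervised TF-IDF classifier for
# flagging defect-related review text. Reviews, labels, and the negative-word
# list below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled reviews: 1 = mentions a defect, 0 = no defect.
reviews = [
    "The zipper broke after two uses and the seam is already fraying.",
    "Battery drains overnight even when the device is powered off.",
    "Screen flickers and the charging port feels loose.",
    "Shipping was slow and the box was dented, very annoying.",
    "Not worth the price, I expected more features for this much money.",
    "Absolutely love the color and it fits perfectly.",
    "Great value, works exactly as described.",
    "Customer service was rude when I asked about my order status.",
]
labels = [1, 1, 1, 0, 0, 0, 0, 0]

# Baseline 1: flag a review as defect-related if it contains any negative word.
NEGATIVE_WORDS = {"broke", "broken", "slow", "annoying", "rude", "bad", "worst"}

def negative_sentiment_flag(text: str) -> int:
    tokens = text.lower().split()
    return int(any(w.strip(".,!?") in NEGATIVE_WORDS for w in tokens))

# Baseline 2: supervised classifier over TF-IDF features; in practice this
# would be trained per product category to mirror the category-specific setup.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(reviews, labels)

# Compare the two signals on the same texts.
for text in reviews:
    print(f"sentiment={negative_sentiment_flag(text)} "
          f"supervised={clf.predict([text])[0]}  {text[:50]}")
```

On a realistic corpus, the heuristic would mirror the paper's finding that negative sentiment alone is a weak defect signal (complaints about price or shipping are negative but not defects), while the supervised model can learn category-specific defect vocabulary when trained on reviews from a single category.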