Topics in the Haystack: Extracting and Evaluating Topics beyond Coherence
Saved in:
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Extracting and identifying latent topics in large text corpora has gained
increasing importance in Natural Language Processing (NLP). Most models,
whether probabilistic models similar to Latent Dirichlet Allocation (LDA) or
neural topic models, follow the same underlying approach of topic
interpretability and topic extraction. We propose a method that incorporates a
deeper understanding of both sentence and document themes, and goes beyond
simply analyzing word frequencies in the data. This allows our model to detect
latent topics that may include uncommon words or neologisms, as well as words
not present in the documents themselves. Additionally, we propose several new
evaluation metrics based on intruder words and similarity measures in the
semantic space. We present correlation coefficients with human identification
of intruder words and achieve near-human level results at the word-intrusion
task. We demonstrate the competitive performance of our method with a large
benchmark study, and achieve superior results compared to state-of-the-art
topic modeling and document clustering models. |
---|---|
DOI: | 10.48550/arxiv.2303.17324 |
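The abstract mentions evaluation metrics based on intruder words and similarity in a semantic (embedding) space. The core idea of word-intrusion detection can be illustrated with a minimal sketch: given a topic's top words plus one injected intruder, flag the word whose average cosine similarity to the rest is lowest. This is an illustrative toy example with hypothetical hand-made embeddings, not the paper's actual metric or implementation.

```python
import numpy as np

def detect_intruder(words, embeddings):
    """Return the word least similar (on average) to the other words.

    words      -- list of candidate topic words, one of which is an intruder
    embeddings -- dict mapping each word to a vector (toy vectors here;
                  a real setup would use pretrained word embeddings)
    """
    vecs = np.array([embeddings[w] for w in words], dtype=float)
    # Normalize rows to unit length so dot products equal cosine similarity.
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T
    # Average similarity of each word to the others (subtract the self-similarity of 1).
    avg = (sims.sum(axis=1) - 1.0) / (len(words) - 1)
    return words[int(np.argmin(avg))]

# Toy embeddings: three "sports" words cluster together, one semantic outlier.
emb = {
    "goal":   [1.0, 0.1, 0.0],
    "score":  [0.9, 0.2, 0.1],
    "match":  [1.0, 0.0, 0.2],
    "enzyme": [0.0, 1.0, 0.9],
}
print(detect_intruder(["goal", "score", "match", "enzyme"], emb))  # enzyme
```

A model's intrusion score can then be defined as the fraction of topics for which this automatic pick agrees with the injected intruder (or with human annotators), which is the kind of correlation with human judgments the abstract reports.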