Topic Modeling as Multi-Objective Contrastive Optimization
Abstract: Recent representation learning approaches enhance neural topic models by optimizing a weighted linear combination of the evidence lower bound (ELBO) of the log-likelihood and a contrastive learning objective that contrasts pairs of input documents. However, document-level contrastive learning might capture low-level mutual information, such as word ratio, which disturbs topic modeling. Moreover, there is a potential conflict between the ELBO loss, which memorizes input details for better reconstruction quality, and the contrastive loss, which attempts to learn topic representations that generalize across input documents. To address these issues, we first introduce a novel contrastive learning method oriented towards sets of topic vectors to capture useful semantics shared among a set of input documents. Second, we explicitly cast contrastive topic modeling as a gradient-based multi-objective optimization problem, with the goal of achieving a Pareto stationary solution that balances the trade-off between the ELBO and the contrastive objective. Extensive experiments demonstrate that our framework consistently produces higher-performing neural topic models in terms of topic coherence, topic diversity, and downstream performance.
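
To make the two components of the abstract concrete, below is a minimal, self-contained PyTorch sketch written under our own assumptions, not taken from the paper's implementation: a set-level InfoNCE loss that mean-pools topic vectors over a document set before contrasting (one plausible reading of the set-oriented objective), and the standard two-objective min-norm (MGDA-style) gradient combination commonly used to seek Pareto-stationary points. All names (`setwise_contrastive_loss`, `pareto_direction`, `temperature`) are illustrative placeholders.

```python
import torch
import torch.nn.functional as F


def setwise_contrastive_loss(topics_a: torch.Tensor,
                             topics_b: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE over *sets* of topic vectors.

    topics_a / topics_b: (batch, set_size, topic_dim) topic vectors for two
    views of each document set. Each set is mean-pooled so the contrast
    happens at the set level rather than between individual documents.
    """
    za = F.normalize(topics_a.mean(dim=1), dim=-1)  # (batch, topic_dim)
    zb = F.normalize(topics_b.mean(dim=1), dim=-1)
    logits = za @ zb.t() / temperature              # (batch, batch) similarities
    labels = torch.arange(za.size(0))               # matching sets on the diagonal
    return F.cross_entropy(logits, labels)


def pareto_direction(g_elbo: torch.Tensor, g_con: torch.Tensor) -> torch.Tensor:
    """Min-norm convex combination alpha*g_elbo + (1 - alpha)*g_con
    (two-objective MGDA closed form); a near-zero result certifies a
    Pareto-stationary point."""
    diff = g_elbo - g_con
    denom = diff.dot(diff)
    if denom.item() < 1e-12:   # gradients (nearly) identical: any alpha works
        alpha = torch.tensor(0.5)
    else:
        # Closed-form minimizer of ||alpha*g1 + (1 - alpha)*g2||^2 on [0, 1].
        alpha = ((g_con - g_elbo).dot(g_con) / denom).clamp(0.0, 1.0)
    return alpha * g_elbo + (1.0 - alpha) * g_con


if __name__ == "__main__":
    # Orthogonal gradients balance at alpha = 0.5.
    print(pareto_direction(torch.tensor([1.0, 0.0]),
                           torch.tensor([0.0, 1.0])))  # tensor([0.5, 0.5])
    # Toy contrastive call: batch of 4 sets, 3 topic vectors of dim 8 each.
    print(setwise_contrastive_loss(torch.randn(4, 3, 8),
                                   torch.randn(4, 3, 8)).item())
```

In a full training step one would backpropagate the ELBO and the contrastive loss separately, flatten the two gradient vectors, and step along the direction returned by `pareto_direction`; a vanishing direction means no joint descent direction exists, i.e., the parameters are Pareto stationary.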
DOI: 10.48550/arxiv.2402.07577