Towards Better Understanding of Contrastive Sentence Representation Learning: A Unified Paradigm for Gradient
Format: Article
Language: English
Abstract: Sentence Representation Learning (SRL) is a crucial task in Natural Language
Processing (NLP), where contrastive Self-Supervised Learning (SSL) is currently
a mainstream approach. However, the reasons behind its remarkable effectiveness
remain unclear. Specifically, many studies have investigated the similarities
between contrastive and non-contrastive SSL from a theoretical perspective.
Such similarities can be verified in classification tasks, where the two
approaches achieve comparable performance. But in ranking tasks (i.e., Semantic
Textual Similarity (STS) in SRL), contrastive SSL significantly outperforms
non-contrastive SSL. Therefore, two questions arise: First, *what commonalities
enable various contrastive losses to achieve superior performance in STS?*
Second, *how can we make non-contrastive SSL also effective in STS?* To address
these questions, we start from the perspective of gradients and discover that
four effective contrastive losses can be integrated into a unified paradigm,
which depends on three components: the **Gradient Dissipation**, the
**Weight**, and the **Ratio**. Then, we conduct an in-depth analysis of the
roles these components play in optimization and experimentally demonstrate
their significance for model performance. Finally, by adjusting these
components, we enable non-contrastive SSL to achieve outstanding performance in
STS.
DOI: 10.48550/arxiv.2402.18281
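As a point of reference for the gradient perspective mentioned in the abstract, consider the InfoNCE objective widely used in contrastive SRL (e.g., SimCSE-style training); note that InfoNCE is an assumed, commonly used example here, not necessarily one of the four losses the paper analyzes, and the expression below is not the paper's unified paradigm. For an anchor embedding $h_i$, its positive $h_i^+$, in-batch candidates $\{h_j^+\}_{j=1}^{N}$, temperature $\tau$, and similarity function $\mathrm{sim}(\cdot,\cdot)$, the per-example loss and its softmax weights are

$$
\ell_i = -\log \frac{\exp\!\big(\mathrm{sim}(h_i, h_i^+)/\tau\big)}{\sum_{j=1}^{N}\exp\!\big(\mathrm{sim}(h_i, h_j^+)/\tau\big)},
\qquad
p_{ij} = \frac{\exp\!\big(\mathrm{sim}(h_i, h_j^+)/\tau\big)}{\sum_{k=1}^{N}\exp\!\big(\mathrm{sim}(h_i, h_k^+)/\tau\big)},
$$

and the gradient with respect to the anchor embedding follows by the chain rule:

$$
\frac{\partial \ell_i}{\partial h_i}
= \frac{1}{\tau}\sum_{j=1}^{N}\big(p_{ij} - \mathbb{1}[j=i]\big)\,
\frac{\partial\, \mathrm{sim}(h_i, h_j^+)}{\partial h_i}.
$$

Gradient-level analyses of contrastive losses typically start from expressions of this kind; the paper's three components (Gradient Dissipation, Weight, and Ratio) are presumably defined over such gradient forms, but their exact definitions are not reproduced here.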