Equivariant Contrastive Learning for Sequential Recommendation
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Contrastive learning (CL) benefits the training of sequential recommendation
models with informative self-supervision signals. Existing solutions apply
general sequential data augmentation strategies to generate positive pairs and
encourage their representations to be invariant. However, due to the inherent
properties of user behavior sequences, some augmentation strategies, such as
item substitution, can lead to changes in user intent. Learning
indiscriminately invariant representations for all augmentation strategies
might be suboptimal. Therefore, we propose Equivariant Contrastive Learning for
Sequential Recommendation (ECL-SR), which endows SR models with strong
discriminative power, making the learned user behavior representations
sensitive to invasive augmentations (e.g., item substitution) and insensitive
to mild augmentations (e.g., feature-level dropout masking). In detail, we use
a conditional discriminator to capture differences in behavior due to item
substitution, which encourages the user behavior encoder to be equivariant to
invasive augmentations. Comprehensive experiments on four benchmark datasets
show that the proposed ECL-SR framework achieves competitive performance
compared to state-of-the-art SR models. The source code is available at
https://github.com/Tokkiu/ECL. |
DOI: | 10.48550/arxiv.2211.05290 |
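The abstract's core distinction — invariance to mild augmentations versus equivariance to invasive ones — can be illustrated with a minimal NumPy sketch. The sequence encoder and the discriminator network themselves are omitted; all function names and hyperparameters (dropout rate, substitution rate, temperature) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_dropout(seq_emb, p=0.1):
    """Mild augmentation: randomly zero embedding features (user intent preserved)."""
    mask = rng.random(seq_emb.shape) > p
    return seq_emb * mask

def item_substitution(seq, n_items, p=0.2):
    """Invasive augmentation: replace some items with random ones (may change intent).
    Returns the augmented sequence and the mask of substituted positions."""
    seq = seq.copy()
    swapped = rng.random(len(seq)) < p
    seq[swapped] = rng.integers(0, n_items, swapped.sum())
    return seq, swapped

def info_nce(z1, z2, temp=0.5):
    """Invariance objective for mild augmentations: pull the two views of the
    same sequence together, push apart views of different sequences."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(probs)).mean()

def substitution_bce(disc_logits, swapped):
    """Equivariance objective: a conditional discriminator predicts which
    positions were substituted, keeping the encoder *sensitive* to invasive
    augmentations instead of invariant to them."""
    p = 1.0 / (1.0 + np.exp(-disc_logits))
    y = swapped.astype(float)
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)).mean()
```

The total training loss would combine the recommendation loss, the invariance term (`info_nce` over feature-dropout views), and the equivariance term (`substitution_bce` over item-substitution views); the exact weighting is a design choice of the paper not reproduced here.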