MoCo4SRec: A momentum contrastive learning framework for sequential recommendation
Published in: Expert systems with applications, 2023-08, Vol. 223, p. 119911, Article 119911
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Sequential recommendation (SR) is an essential component of modern recommender systems. It models the dynamic interests of users based on their sequential interactions. Recently, several studies have utilized sequential deep learning models such as Recurrent Neural Networks and Transformers to facilitate sequential recommendation, with promising results. Inspired by the rise of contrastive learning techniques, some methods are devoted to enhancing sequential deep learning models by designing contrastive learning losses on self-supervised signals. However, a number of obstacles still make it challenging to efficiently learn user representations with contrastive learning. These issues include, but are not limited to, data sparsity, noisy data, and sampling bias (e.g., false negatives), particularly in complex, parameter-intensive models.
In light of these challenges, we examine how to deal with data sparsity and noisy data by applying contrastive Self-Supervised Learning (SSL) and Momentum Contrast (MoCo) to sequential recommendation. Beyond the typical in-batch negatives, our basic idea is to maintain a dynamic queue that expands the pool of negative samples using a moving-averaged encoder. After being augmented by sequence-level and embedding-level methods, the representations from all historical encoder outputs are pushed into the dynamic queue, which usually leads to sampling bias when potential positives in the queue are used as expanded negative samples in contrastive learning. To tackle this issue, we integrate the momentum updating mechanism with a novel instance weighting mechanism that penalizes false negatives and preserves the model's efficacy. We combine these components into a new framework called the Momentum Contrastive Learning Framework for Sequential Recommendation (MoCo4SRec). Experiments on eight real-world datasets demonstrate that the proposed method outperforms current benchmarks by learning improved user representations.
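The queue-plus-momentum-encoder mechanism described in the abstract can be sketched in a few lines. This is a minimal illustration of the general MoCo-style machinery, not the paper's implementation: the class and parameter names are invented here, and the "encoder" is reduced to a single weight vector so the example stays self-contained.

```python
import collections

class MomentumQueue:
    """Toy sketch of MoCo-style machinery: a key encoder updated as a
    moving average of the query encoder, plus a fixed-size FIFO queue
    of past key-encoder outputs used as expanded negative samples."""

    def __init__(self, dim, queue_size, momentum=0.999):
        self.momentum = momentum
        # deque with maxlen evicts the oldest entries automatically,
        # matching the "dynamic queue" of historical representations
        self.queue = collections.deque(maxlen=queue_size)
        # stand-ins for real network parameters
        self.query_weights = [0.0] * dim
        self.key_weights = [0.0] * dim

    def momentum_update(self):
        # theta_k <- m * theta_k + (1 - m) * theta_q
        self.key_weights = [
            self.momentum * k + (1.0 - self.momentum) * q
            for k, q in zip(self.key_weights, self.query_weights)
        ]

    def enqueue(self, batch_keys):
        # push the momentum encoder's outputs for the current batch
        self.queue.extend(batch_keys)

    def negatives(self):
        # all queued representations serve as extra negatives,
        # beyond the in-batch ones
        return list(self.queue)
```

With a momentum close to 1 (e.g., 0.999), the key encoder drifts only slightly per step, which is what keeps the queued historical representations consistent enough to reuse as negatives.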
Highlights:
•A new framework (MoCo4SRec) that brings together momentum and contrastive SSL with SR.
•Samples are perturbed not just at the sequence level but also at the embedding level.
•An instance weighting method is introduced to penalize the sampled false negatives.
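The instance-weighting idea in the last highlight can be illustrated with a minimal InfoNCE-style contrastive loss. The hard thresholding rule below, which zeroes out queue negatives whose similarity to the query is suspiciously high, is an illustrative assumption standing in for the paper's actual weighting scheme, and the function name is invented here.

```python
import math

def weighted_info_nce(sim_pos, sim_negs, temperature=0.1, fn_threshold=0.9):
    """Sketch of an InfoNCE-style loss with instance weighting: queued
    negatives that look too similar to the query are treated as probable
    false negatives and down-weighted to zero. Inputs are cosine
    similarities in [-1, 1]."""
    pos = math.exp(sim_pos / temperature)
    denom = pos
    for s in sim_negs:
        # penalize suspected false negatives by removing their weight
        weight = 0.0 if s > fn_threshold else 1.0
        denom += weight * math.exp(s / temperature)
    return -math.log(pos / denom)
```

A near-duplicate negative (similarity above the threshold) then contributes nothing to the denominator, so the model is not pushed away from a representation that is likely a potential positive from the queue.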
ISSN: 0957-4174; 1873-6793
DOI: 10.1016/j.eswa.2023.119911