Graph Diffusion-Based Representation Learning for Sequential Recommendation
Published in: IEEE Transactions on Knowledge and Data Engineering, 2024-12, Vol. 36 (12), pp. 8395-8407
Main Authors: , , , , , ,
Format: Article
Language: English
Online Access: Order full text
Abstract: Sequential recommendation is a critical component of flourishing online applications, suggesting appealing items for users' next interactions; global dependencies among items have proven indispensable for enhancing the quality of item representations toward a better understanding of users' dynamic preferences. Constrained by the over-smoothing problem, existing methods rely on pre-defined graphs with shallow Graph Neural Networks to capture these necessary dependencies. However, this graph representation learning paradigm makes it difficult to meet the original expectation, owing to noisy graph structures and the limited ability of shallow architectures to model high-order relations. In this paper, we propose a novel Graph Diffusion Representation-enhanced Attention Network for sequential recommendation, which explores the construction of deeper networks by applying graph diffusion to adaptive graph structures to generate expressive item representations. Specifically, we design an adaptive graph generation strategy that leverages similarity learning between item embeddings, automatically optimizing the input graph topology under the guidance of the downstream recommendation task. We then propose a novel graph diffusion paradigm that is robust to over-smoothing, enriching the learned item representations with sufficient global dependencies for attention-based sequential modeling. Extensive experiments demonstrate the effectiveness of our approach over state-of-the-art baselines.
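The two core ideas in the abstract — an adaptive item graph learned from embedding similarities, and a diffusion step that propagates multi-hop dependencies while resisting over-smoothing — can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the paper's implementation: the function names (`adaptive_graph`, `diffuse`), the top-k sparsification, and the personalized-PageRank-style restart term are generic stand-ins for whatever the authors actually use.

```python
import numpy as np

def adaptive_graph(item_emb, top_k=2):
    """Build a row-stochastic transition matrix from item-embedding similarity.

    Hypothetical sketch: cosine similarity + top-k sparsification stands in
    for the paper's learned adaptive graph topology.
    """
    norm = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, 0.0)        # no self-loops before sparsification
    sim = np.maximum(sim, 0.0)        # keep only positively correlated items
    # keep the top-k strongest edges per item -> sparse adjacency
    adj = np.zeros_like(sim)
    idx = np.argsort(-sim, axis=1)[:, :top_k]
    rows = np.arange(sim.shape[0])[:, None]
    adj[rows, idx] = sim[rows, idx]
    adj = np.maximum(adj, adj.T)      # symmetrize
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0               # guard isolated items
    return adj / deg                  # row-normalized transition matrix

def diffuse(item_emb, trans, alpha=0.15, steps=10):
    """Personalized-PageRank-style diffusion over the learned graph.

    Each step mixes in further neighborhoods (global dependencies) while the
    restart term `alpha * item_emb` anchors every item to its own embedding,
    which is one standard way to resist over-smoothing in deep propagation.
    """
    h = item_emb.copy()
    for _ in range(steps):
        h = (1 - alpha) * trans @ h + alpha * item_emb
    return h
```

The diffused representations `diffuse(emb, adaptive_graph(emb))` could then feed an attention-based sequence encoder; with `alpha = 1.0` the diffusion degenerates to the raw embeddings, making the role of the restart term easy to check.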
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2024.3477621