On the instability of embeddings for recommender systems: the case of Matrix Factorization
Format: Article
Language: English
Abstract: Most state-of-the-art top-N collaborative recommender systems work by learning embeddings to jointly represent users and items. Learned embeddings are considered effective for solving a variety of tasks, among them providing and explaining recommendations. In this paper we question the reliability of the embeddings learned by Matrix Factorization (MF). We empirically demonstrate that, by simply changing the initial values assigned to the latent factors, the same MF method generates very different embeddings of items and users, and we highlight that this effect is stronger for less popular items. To overcome these drawbacks, we present a generalization of MF, called Nearest Neighbors Matrix Factorization (NNMF). The new method propagates the information about items and users to their neighbors, speeding up the training procedure and extending the amount of information that supports recommendations and representations. We describe the NNMF variants of three common MF approaches, and with extensive experiments on five different datasets we show that they strongly mitigate the instability issues of the original MF versions and improve the accuracy of recommendations on the long tail.
DOI: 10.48550/arxiv.2104.05796
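
The abstract's central claim, that re-running MF with different random initializations yields very different embeddings, can be illustrated with a minimal sketch. The code below is not the paper's implementation: it trains a plain SGD-based matrix factorization twice on a small synthetic rating matrix (the function names, hyperparameters, and synthetic data are assumptions made for illustration) and measures how much each item's nearest-neighbor set in embedding space changes between the two random seeds.

```python
# Minimal sketch (not the paper's code): train basic MF twice with different
# random initializations and compare the resulting item embeddings.
import numpy as np

def train_mf(R, k=8, lr=0.01, reg=0.05, epochs=200, seed=0):
    """Plain MF via SGD on the observed (non-zero) entries of R."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    users, items = np.nonzero(R)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

def nn_overlap(Q1, Q2, top=5):
    """Average overlap of each item's top-k neighborhood across two runs."""
    def topk(Q):
        sims = Q @ Q.T
        np.fill_diagonal(sims, -np.inf)
        return np.argsort(-sims, axis=1)[:, :top]
    a, b = topk(Q1), topk(Q2)
    return np.mean([len(set(a[i]) & set(b[i])) / top for i in range(len(a))])

# Tiny synthetic rating matrix; zeros are treated as missing entries.
rng = np.random.default_rng(42)
R = rng.integers(0, 6, size=(50, 40)).astype(float)
R[rng.random(R.shape) < 0.7] = 0.0   # sparsify to mimic implicit missingness

_, Q_a = train_mf(R, seed=1)
_, Q_b = train_mf(R, seed=2)
print(f"item neighborhood overlap across seeds: {nn_overlap(Q_a, Q_b):.2f}")
```

A low overlap value indicates that the two runs place items near different neighbors despite identical data and hyperparameters; this is the kind of initialization-dependent instability the paper quantifies and that NNMF is designed to mitigate.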