Do Not Wait: Learning Re-Ranking Model Without User Feedback At Serving Time in E-Commerce
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Recommender systems are widely used in e-commerce, and re-ranking
models play an increasingly significant role in the domain: they leverage
inter-item influence to determine the final recommendation lists. Online
learning methods keep updating a deployed model with the latest available
samples to capture shifts in the underlying data distribution in e-commerce.
However, they depend on the availability of real user feedback, such as item
purchases, which may be delayed by hours or even days, leading to a lag in
model enhancement. In this paper, we propose a novel extension of online
learning methods for re-ranking modeling, which we term LAST, an acronym for
Learning At Serving Time. It circumvents the need for user feedback by using a
surrogate model to provide the instructional signal that steers model
improvement. Upon receiving an online request, LAST finds and applies a model
modification on the fly before generating the recommendation result for that
request. The modification is request-specific and transient: it is tailored
exclusively to the current request so as to capture its specific context.
After the request is served, the modification is discarded, which helps
prevent error propagation and stabilizes the online learning procedure, since
the predictions of the surrogate model may be inaccurate. Most importantly, as
a complement to feedback-based online learning methods, LAST can be seamlessly
integrated into existing online learning systems to create a more adaptive and
responsive recommendation experience. Comprehensive experiments, both offline
and online, confirm that LAST outperforms state-of-the-art re-ranking models.
DOI: 10.48550/arxiv.2406.14004
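
The summary describes a serving loop in which a surrogate model replaces delayed user feedback as the learning signal, a transient request-specific update is applied to a copy of the re-ranker, and the update is thrown away after the request. The sketch below illustrates that idea only; the model interfaces (`reranker`, `surrogate`, their forward signatures), the MSE-style surrogate loss, and the hyperparameters are assumptions for illustration, not the paper's actual implementation.

```python
import copy
import torch
import torch.nn.functional as F


def serve_with_last(reranker, surrogate, request_features, candidate_items,
                    lr: float = 0.01, steps: int = 1):
    """Hypothetical sketch of a LAST-style serving step, assuming both
    models are torch.nn.Module re-rankers that map (request_features,
    candidate_items) to per-item scores."""
    # Work on a throwaway copy so the deployed model is never mutated;
    # the request-specific modification is discarded after this request.
    temp_model = copy.deepcopy(reranker)
    optimizer = torch.optim.SGD(temp_model.parameters(), lr=lr)

    for _ in range(steps):
        scores = temp_model(request_features, candidate_items)
        # The surrogate model supplies the instructional signal in place of
        # real user feedback (e.g., purchases that arrive hours later).
        with torch.no_grad():
            surrogate_scores = surrogate(request_features, candidate_items)
        # Assumed loss: pull the temporary model toward the surrogate's
        # predictions for this specific request only.
        loss = F.mse_loss(scores, surrogate_scores)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Generate the recommendation list with the temporarily adapted model.
    with torch.no_grad():
        final_scores = temp_model(request_features, candidate_items)
    ranking = torch.argsort(final_scores, descending=True)

    # temp_model goes out of scope here, so even an inaccurate surrogate
    # cannot propagate errors into the deployed model or future requests.
    return ranking
```

Discarding the per-request update is the key design choice stated in the summary: it keeps the surrogate's possibly inaccurate predictions from accumulating in the deployed model, while feedback-based online learning continues to update that model separately.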