Matching While Learning
Published in: Operations Research, 2021-03, Vol. 69 (2), pp. 655-681
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Summary: Platforms face a *cold start problem* whenever new users arrive: namely, the platform must learn attributes of new users (explore) in order to match them better in the future (exploit). How should a platform handle cold starts when there are limited quantities of the items being recommended? For instance, how should a labor market platform match workers to jobs over the lifetime of the worker, given a limited supply of jobs? In this setting, there is one multiarmed bandit problem for each worker, coupled together by the constrained supply of jobs of different types. A solution to this problem is developed: the platform should estimate a shadow price for each job type and, for each worker, adjust payoffs by these prices (i) to balance learning with payoffs early on and (ii) to myopically match them thereafter.
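To make the shadow-price idea concrete, here is a minimal sketch, not taken from the paper: it solves a steady-state matching LP with SciPy and reads the duals of the job-capacity constraints as shadow prices, then forms price-adjusted payoffs. The payoff matrix, worker request rates, and job capacities are illustrative placeholders.

```python
# A minimal sketch (not from the paper): shadow prices for job types as the
# duals of the capacity constraints in a steady-state matching LP.
import numpy as np
from scipy.optimize import linprog

# Illustrative placeholders: 2 worker types, 2 job types.
payoff = np.array([[1.0, 0.2],        # expected payoff[worker_type, job_type]
                   [0.3, 0.9]])
worker_rate = np.array([0.6, 0.6])    # rate of job requests per worker type
job_capacity = np.array([0.5, 0.5])   # arrival rate of each job type

n_w, n_j = payoff.shape

# Variables x[w, j] = rate of matching worker type w to job type j (flattened).
# Maximize total payoff  <=>  minimize its negative.
c = -payoff.ravel()

# Row block 1: capacity constraints  sum_w x[w, j] <= job_capacity[j].
A_cap = np.zeros((n_j, n_w * n_j))
for j in range(n_j):
    A_cap[j, j::n_j] = 1.0
# Row block 2: demand constraints    sum_j x[w, j] <= worker_rate[w].
A_dem = np.zeros((n_w, n_w * n_j))
for w in range(n_w):
    A_dem[w, w * n_j:(w + 1) * n_j] = 1.0

res = linprog(c,
              A_ub=np.vstack([A_cap, A_dem]),
              b_ub=np.concatenate([job_capacity, worker_rate]),
              bounds=(0, None), method="highs")

# Shadow price of job type j = marginal value of extra capacity of type j.
# linprog reports duals of the minimization, hence the sign flip.
shadow_price = -res.ineqlin.marginals[:n_j]

# The policy described above works with price-adjusted payoffs.
adjusted_payoff = payoff - shadow_price[None, :]
print("shadow prices:", shadow_price)
print("price-adjusted payoffs:\n", adjusted_payoff)
```

With these placeholder numbers, both capacity constraints bind, so each job type earns a positive shadow price and the price-adjusted payoffs of a worker's best-fitting job type drop to roughly zero while poor fits become negative.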
Abstract: We consider the problem faced by a service platform that needs to match limited supply with demand while learning the attributes of new users to match them better in the future. We introduce a benchmark model with heterogeneous workers (demand) and a limited supply of jobs that arrive over time. Job types are known to the platform, but worker types are unknown and must be learned by observing match outcomes. Workers depart after performing a certain number of jobs. The expected payoff from a match depends on the pair of types, and the goal is to maximize the steady-state rate of accumulation of payoff. Although we use terminology inspired by labor markets, our framework applies more broadly to platforms where a limited supply of heterogeneous products is matched to users over time. Our main contribution is a complete characterization of the structure of the optimal policy in the limit that each worker performs many jobs. The platform faces a tradeoff for each worker between myopically maximizing payoffs (*exploitation*) and learning the type of the worker (*exploration*). This creates a multitude of multiarmed bandit problems, one for each worker, coupled together by the constraint on availability of jobs of different types (*capacity constraints*). We find that the platform should estimate a shadow price for each job type and use the payoffs adjusted by these prices first to determine its learning goals and then for each worker (i) to balance learning with payoffs during the *exploration phase* and (ii) to myopically match after it has achieved its learning goals during the exploitation phase.
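As a rough illustration of the two-phase, price-adjusted structure described in the abstract, the sketch below tracks a Bayesian posterior over one worker's type and works with price-adjusted payoffs; it is not the paper's policy. The success probabilities, shadow prices, exploration bonus, and the posterior-confidence threshold standing in for the paper's learning goals are all assumptions made for illustration.

```python
# A simplified illustration (not the paper's exact policy) of the two-phase,
# price-adjusted structure: explore until a confidence threshold is reached
# (a stand-in for the paper's learning goals), then match myopically.
import numpy as np

rng = np.random.default_rng(0)

success_prob = np.array([[0.9, 0.3],   # P(success | worker type, job type)
                         [0.2, 0.8]])
payoff = success_prob                  # expected payoff of a match
shadow_price = np.array([0.55, 0.45])  # e.g. taken from an LP as sketched above
adjusted = payoff - shadow_price       # price-adjusted expected payoffs

n_types, n_jobs = payoff.shape
CONFIDENCE = 0.95                      # assumed "learning goal" threshold

def choose_job(belief, available, explore_bonus=0.2):
    """Pick a job type index for one worker given the current belief."""
    # Expected price-adjusted payoff of each job type under the belief.
    value = belief @ adjusted
    if belief.max() < CONFIDENCE:
        # Exploration phase: trade off payoff against information, here via
        # a crude bonus for job types whose outcomes separate the types most.
        informativeness = success_prob.max(axis=0) - success_prob.min(axis=0)
        value = value + explore_bonus * informativeness
    # Exploitation phase (or tie-break): myopic price-adjusted argmax
    # among currently available job types.
    value = np.where(available, value, -np.inf)
    return int(np.argmax(value))

def update_belief(belief, job, outcome):
    """Bayes update of the posterior over worker types after one match."""
    like = success_prob[:, job] if outcome else 1.0 - success_prob[:, job]
    post = belief * like
    return post / post.sum()

# Simulate one worker of (hidden) type 1 performing a few jobs.
true_type = 1
belief = np.full(n_types, 1.0 / n_types)
for t in range(20):
    available = np.array([True, True])   # job availability in this period
    j = choose_job(belief, available)
    outcome = rng.random() < success_prob[true_type, j]
    belief = update_belief(belief, j, outcome)
print("final belief over worker types:", np.round(belief, 3))
```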
ISSN: 0030-364X, 1526-5463
DOI: 10.1287/opre.2020.2013