Cascading Bandits: Learning to Rank in the Cascade Model
A search engine usually outputs a list of $K$ web pages. The user examines this list, from the first web page to the last, and chooses the first attractive page. This model of user behavior is known as the cascade model. In this paper, we propose cascading bandits, a learning variant of the cascade model where the objective is to identify the $K$ most attractive items. We formulate our problem as a stochastic combinatorial partial monitoring problem. We propose two algorithms for solving it, CascadeUCB1 and CascadeKL-UCB. We also prove gap-dependent upper bounds on the regret of these algorithms and derive a lower bound on the regret in cascading bandits. The lower bound matches the upper bound of CascadeKL-UCB up to a logarithmic factor. We experiment with our algorithms on several problems. The algorithms perform surprisingly well even when our modeling assumptions are violated.
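The abstract names CascadeUCB1 but does not spell out the algorithm on this page, so the following is a minimal sketch reconstructed from the cascade-model description above: rank items by a UCB1-style optimistic index, show the top $K$, and update only the items the user actually examined. The class name, the exploration constant 1.5, and the simulated attraction probabilities `w` are assumptions for illustration, not the paper's exact specification.

```python
import math
import random

class CascadeUCB1:
    """Sketch of a UCB1-style learner for the cascade model (assumed details)."""

    def __init__(self, n_items, k):
        self.n, self.k = n_items, k
        self.pulls = [0] * n_items  # how often each item's attraction was observed
        self.wins = [0] * n_items   # how often each item was the first click
        self.t = 0                  # round counter

    def recommend(self):
        # Rank items by their UCB index and show the top k.
        self.t += 1

        def ucb(e):
            if self.pulls[e] == 0:
                return float("inf")  # observe every item at least once
            mean = self.wins[e] / self.pulls[e]
            return mean + math.sqrt(1.5 * math.log(self.t) / self.pulls[e])

        return sorted(range(self.n), key=ucb, reverse=True)[: self.k]

    def update(self, ranked, click_pos):
        # Cascade feedback: positions above the click were examined and found
        # unattractive, the clicked item was attractive, and positions below
        # the click were never examined, so they stay unobserved.
        last = click_pos if click_pos is not None else self.k - 1
        for pos in range(last + 1):
            e = ranked[pos]
            self.pulls[e] += 1
            if pos == click_pos:
                self.wins[e] += 1

# Usage against a simulated cascade user; the attraction probabilities
# below are made up for illustration.
w = [0.9, 0.8, 0.2, 0.1, 0.05]
agent = CascadeUCB1(n_items=len(w), k=2)
for _ in range(1000):
    ranked = agent.recommend()
    click = next((i for i, e in enumerate(ranked) if random.random() < w[e]), None)
    agent.update(ranked, click)
```

The update rule is what distinguishes this from an ordinary top-$K$ bandit: unexamined items below the click contribute no feedback, which is exactly the partial-monitoring structure the abstract refers to.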
Saved in:

Main authors: | Kveton, Branislav; Szepesvari, Csaba; Wen, Zheng; Ashkan, Azin |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Statistics - Machine Learning |
Online access: | Order full text |
creator | Kveton, Branislav; Szepesvari, Csaba; Wen, Zheng; Ashkan, Azin |
format | Article |
identifier | DOI: 10.48550/arxiv.1502.02763 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Learning; Statistics - Machine Learning |
title | Cascading Bandits: Learning to Rank in the Cascade Model |