Incentivized Learning in Principal-Agent Bandit Games
This work considers a repeated principal-agent bandit game, where the principal can only interact with her environment through the agent. The principal and the agent have misaligned objectives and the choice of action is only left to the agent. However, the principal can influence the agent's decisions by offering incentives which add up to his rewards. The principal aims to iteratively learn an incentive policy to maximize her own total utility. This framework extends usual bandit problems and is motivated by several practical applications, such as healthcare or ecological taxation, where traditionally used mechanism design theories often overlook the learning aspect of the problem. We present nearly optimal (with respect to a horizon $T$) learning algorithms for the principal's regret in both multi-armed and linear contextual settings. Finally, we support our theoretical guarantees through numerical experiments.
Saved in:
Main Authors: | Scheid, Antoine; Tiapkin, Daniil; Boursier, Etienne; Capitaine, Aymeric; Mhamdi, El Mahdi El; Moulines, Eric; Jordan, Michael I; Durmus, Alain |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Science and Game Theory; Computer Science - Learning; Statistics - Machine Learning |
Online Access: | Order full text |
creator | Scheid, Antoine; Tiapkin, Daniil; Boursier, Etienne; Capitaine, Aymeric; Mhamdi, El Mahdi El; Moulines, Eric; Jordan, Michael I; Durmus, Alain |
description | This work considers a repeated principal-agent bandit game, where the principal can only interact with her environment through the agent. The principal and the agent have misaligned objectives and the choice of action is only left to the agent. However, the principal can influence the agent's decisions by offering incentives which add up to his rewards. The principal aims to iteratively learn an incentive policy to maximize her own total utility. This framework extends usual bandit problems and is motivated by several practical applications, such as healthcare or ecological taxation, where traditionally used mechanism design theories often overlook the learning aspect of the problem. We present nearly optimal (with respect to a horizon $T$) learning algorithms for the principal's regret in both multi-armed and linear contextual settings. Finally, we support our theoretical guarantees through numerical experiments. |
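The interaction protocol the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's actual algorithm: the arm count, reward parameters, and function names (`theta_agent`, `theta_principal`, `play_round`) are all hypothetical, and the agent is modeled as best-responding to posted incentives while the principal pays the incentive on the chosen arm.

```python
import numpy as np

# One round of a (simplified) principal-agent bandit game: the principal
# posts a per-arm incentive vector; the agent picks the arm maximizing his
# own mean reward plus the incentive; the principal's utility is her mean
# reward on that arm minus the transfer she paid.
rng = np.random.default_rng(0)
K = 4
theta_agent = rng.uniform(size=K)      # agent's mean rewards (unknown to the principal)
theta_principal = rng.uniform(size=K)  # principal's mean rewards

def play_round(incentives):
    arm = int(np.argmax(theta_agent + incentives))     # agent best-responds
    utility = theta_principal[arm] - incentives[arm]   # principal pays the transfer
    return arm, utility

# With no incentives the agent follows his own preferences...
arm0, _ = play_round(np.zeros(K))
# ...while a transfer of 1.0 (larger than any gap, since rewards lie in [0, 1))
# on the principal's preferred arm redirects the agent there.
target = int(np.argmax(theta_principal))
bonus = np.zeros(K)
bonus[target] = 1.0
arm1, _ = play_round(bonus)
```

The learning problem the paper studies is exactly the part this sketch omits: the principal does not know `theta_agent`, so she must estimate, from the agent's observed choices, the smallest incentives that redirect him, and balance that exploration against her cumulative utility over the horizon $T$.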
doi_str_mv | 10.48550/arxiv.2403.03811 |
format | Article |
creationdate | 2024-03-06 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 (open access) |
link | https://arxiv.org/abs/2403.03811 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2403.03811 |
language | eng |
recordid | cdi_arxiv_primary_2403_03811 |
source | arXiv.org |
subjects | Computer Science - Computer Science and Game Theory; Computer Science - Learning; Statistics - Machine Learning |
title | Incentivized Learning in Principal-Agent Bandit Games |