Stochastic Approximation with Decision-Dependent Distributions: Asymptotic Normality and Optimality
Main authors: | Cutler, Joshua; Díaz, Mateo; Drusvyatskiy, Dmitriy |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Mathematics - Optimization and Control; Statistics - Machine Learning |
Online access: | Order full text |
container_end_page | 49 |
---|---|
container_issue | 90 |
container_start_page | 1 |
container_title | Journal of Machine Learning Research |
container_volume | 25 |
creator | Cutler, Joshua; Díaz, Mateo; Drusvyatskiy, Dmitriy |
description | Journal of Machine Learning Research, 25(90):1-49, 2024. We analyze a stochastic approximation algorithm for decision-dependent problems, wherein the data distribution used by the algorithm evolves along the iterate sequence. The primary examples of such problems appear in performative prediction and its multiplayer extensions. We show that under mild assumptions, the deviation between the average iterate of the algorithm and the solution is asymptotically normal, with a covariance that clearly decouples the effects of the gradient noise and the distributional shift. Moreover, building on the work of Hájek and Le Cam, we show that the asymptotic performance of the algorithm with averaging is locally minimax optimal. |
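The algorithm described in the abstract is stochastic approximation in which each sample is drawn from a distribution that depends on the current decision variable, followed by averaging of the iterates. The sketch below is not taken from the paper; it only illustrates that setup on a toy problem. The quadratic loss, the Gaussian location-family distribution map, the shift strength `eps`, and the t^(-3/4) step-size schedule are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's construction): stochastic gradient
# steps where each sample comes from a distribution that shifts with the current
# decision, combined with Polyak-Ruppert iterate averaging.
import numpy as np

rng = np.random.default_rng(0)
d = 5              # problem dimension (illustrative)
eps = 0.3          # strength of the decision-dependent shift (illustrative, < 1)
x = np.zeros(d)    # current iterate x_t
avg = np.zeros(d)  # running average of the iterates
T = 10_000

for t in range(1, T + 1):
    # Decision-dependent data: z ~ D(x_t), modeled here as a Gaussian whose mean shifts with x_t.
    z = eps * x + rng.normal(size=d)
    # Stochastic gradient of the loss 0.5 * ||x - z||^2 evaluated at the sampled z.
    g = x - z
    # Robbins-Monro step size eta_t = t^(-3/4); any exponent strictly between 1/2 and 1 works here.
    x = x - g / t**0.75
    # Incremental update of the running average: avg_t = (1/t) * sum_{s <= t} x_s.
    avg += (x - avg) / t

print("averaged iterate:", avg)  # should lie near the equilibrium point 0 of this toy model
```

In this toy model the expected update direction at x is (1 - eps) * x, so the only equilibrium is x = 0 and the averaged iterate settles near it; the paper's results concern the asymptotic distribution of exactly such averaged iterates around the solution.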
doi_str_mv | 10.48550/arxiv.2207.04173 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2207.04173 |
ispartof | Journal of Machine Learning Research, 25(90):1-49, 2024 |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2207_04173 |
source | arXiv.org |
subjects | Computer Science - Learning; Mathematics - Optimization and Control; Statistics - Machine Learning |
title | Stochastic Approximation with Decision-Dependent Distributions: Asymptotic Normality and Optimality |
url | https://arxiv.org/abs/2207.04173 |