Zeroth-order non-convex learning via hierarchical dual averaging

We propose a hierarchical version of dual averaging for zeroth-order online non-convex optimization - i.e., learning processes where, at each stage, the optimizer is facing an unknown non-convex loss function and only receives the incurred loss as feedback. The proposed class of policies relies on the construction of an online model that aggregates loss information as it arrives, and it consists of two principal components: (a) a regularizer adapted to the Fisher information metric (as opposed to the metric norm of the ambient space); and (b) a principled exploration of the problem's state space based on an adapted hierarchical schedule. This construction enables sharper control of the model's bias and variance, and allows us to derive tight bounds for both the learner's static and dynamic regret - i.e., the regret incurred against the best dynamic policy in hindsight over the horizon of play.
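A minimal, runnable sketch of the general idea described in the abstract (not the authors' exact policy): dual averaging on a finite probability simplex with an entropic regularizer, which is a simple special case of a regularizer adapted to the Fisher information metric, combined with one-point bandit estimates of the incurred loss. The paper's hierarchical exploration schedule is replaced here by a flat uniform-exploration weight `eps`; both `eps` and the learning rate `eta` are hypothetical tuning choices for illustration, not values from the paper.

```python
import numpy as np

def zeroth_order_dual_averaging(loss_fns, n_actions, horizon, eta=0.1, eps=0.05):
    """Illustrative sketch: dual averaging with entropic regularizer and
    one-point zeroth-order (bandit) loss estimates.  loss_fns[t](a) returns
    the loss of the sampled action a at stage t; only that scalar is observed."""
    rng = np.random.default_rng(0)
    score = np.zeros(n_actions)   # cumulative negated loss estimates (dual variable)
    total_loss = 0.0
    for t in range(horizon):
        # Mirror step: with an entropic regularizer, the dual-averaging
        # prox-mapping is a softmax over the cumulative scores.
        w = np.exp(eta * (score - score.max()))
        # Mix in uniform exploration (a flat stand-in for the paper's
        # hierarchical exploration schedule).
        p = (1.0 - eps) * w / w.sum() + eps / n_actions
        a = rng.choice(n_actions, p=p)
        loss = loss_fns[t](a)     # zeroth-order feedback: the incurred loss only
        total_loss += loss
        # Importance-weighted one-point estimate keeps the loss model unbiased;
        # the exploration floor on p[a] bounds its variance.
        score[a] -= loss / p[a]
    return total_loss / horizon
```

Dividing the observed loss by the sampling probability makes the one-point estimate unbiased, while the exploration term caps how large that correction can get - the same bias/variance trade-off that the abstract says the hierarchical construction controls more sharply.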

Bibliographic details
Main authors: Héliou, Amélie; Martin, Matthieu; Mertikopoulos, Panayotis; Rahier, Thibaud
Format: Article
Language: English
Subjects: Computer Science - Learning; Mathematics - Optimization and Control
Online access: Order full text
DOI: 10.48550/arxiv.2109.05829
Date: 2021-09-13
Source: arXiv.org