The Randomized Block Coordinate Descent Method in the Hölder Smooth Setting

This work provides the first convergence analysis for the Randomized Block Coordinate Descent method for minimizing a function that is both Hölder smooth and block Hölder smooth. Our analysis applies to objective functions that are non-convex, convex, and strongly convex. For non-convex functions, we show that the expected gradient norm decreases at an $O\left(k^{-\frac{\gamma}{1+\gamma}}\right)$ rate, where $k$ is the iteration count and $\gamma$ is the Hölder exponent. For convex functions, we show that the expected suboptimality gap decreases at the rate $O\left(k^{-\gamma}\right)$. In the strongly convex setting, we show this rate for the expected suboptimality gap improves to $O\left(k^{-\frac{2\gamma}{1-\gamma}}\right)$ when $\gamma<1$ and to a linear rate when $\gamma=1$. Notably, these new convergence rates coincide with those furnished in the existing literature for the Lipschitz smooth setting.
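
For readers who want a concrete picture of the method the abstract refers to, below is a minimal, illustrative sketch of generic randomized block coordinate descent on a toy quadratic. It is not the authors' algorithm: the block is sampled uniformly, the Hölder-smoothness-based step-size rule analyzed in the paper is replaced by a fixed step size, and the function name rbcd, the block partition, and all parameter values are hypothetical choices made only for this example.

# Minimal sketch of randomized block coordinate descent (RBCD) on a toy
# quadratic. NOT the paper's exact method: fixed step size stands in for
# the Hölder-smoothness-based step-size rule; all names/values are
# illustrative assumptions.
import numpy as np

def rbcd(grad, x0, blocks, step=0.1, iters=1000, rng=None):
    """Run RBCD: at each iteration, sample one block of coordinates
    uniformly at random and take a gradient step on that block only."""
    rng = np.random.default_rng(rng)
    x = x0.astype(float).copy()
    for _ in range(iters):
        b = blocks[rng.integers(len(blocks))]   # sample a block uniformly
        g = grad(x)                             # full gradient; only block b is used
        x[b] -= step * g[b]                     # update the sampled block
    return x

# Usage on f(x) = 0.5 * ||A x - b||^2 with two coordinate blocks.
A = np.array([[3.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 0.0, 2.0])
grad = lambda x: A.T @ (A @ x - b)
x_final = rbcd(grad, np.zeros(3), blocks=[np.array([0, 1]), np.array([2])],
               step=0.05, iters=5000, rng=0)
print(np.linalg.norm(grad(x_final)))  # gradient norm should be small at convergence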

Bibliographic Details
Main authors: Maia, Leandro Farias; Gutman, David Huckleberry
Format: Article
Language: eng
Subjects: Mathematics - Optimization and Control
Online access: Order full text
DOI: 10.48550/arxiv.2403.08080
Source: arXiv.org