Random Scaling and Momentum for Non-smooth Non-convex Optimization

Training neural networks requires optimizing a loss function that may be highly irregular, and in particular neither convex nor smooth. Popular training algorithms are based on stochastic gradient descent with momentum (SGDM), for which classical analysis applies only if the loss is either convex or smooth. We show that a very small modification to SGDM closes this gap: simply scale the update at each time point by an exponentially distributed random scalar. The resulting algorithm achieves optimal convergence guarantees. Intriguingly, this result is not derived by a specific analysis of SGDM: instead, it falls naturally out of a more general framework for converting online convex optimization algorithms to non-convex optimization algorithms.
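
As a quick illustration of the modification described in the abstract (not the paper's exact algorithm; the function name, hyperparameters, and toy objective below are illustrative assumptions), the following Python sketch runs SGD with momentum and multiplies each update by a scalar drawn from an Exponential(1) distribution before applying it.

    import numpy as np

    def sgdm_random_scaling(grad_fn, w0, lr=0.01, beta=0.9, n_steps=1000, seed=0):
        """SGD with momentum where each update is scaled by an exponentially
        distributed random scalar (illustrative sketch; the paper's exact
        algorithm, constants, and output selection may differ)."""
        rng = np.random.default_rng(seed)
        w = np.asarray(w0, dtype=float).copy()
        m = np.zeros_like(w)                         # momentum buffer
        for _ in range(n_steps):
            g = np.asarray(grad_fn(w), dtype=float)  # stochastic (sub)gradient at w
            m = beta * m + (1.0 - beta) * g          # exponential moving average of gradients
            s = rng.exponential(scale=1.0)           # random scalar s ~ Exp(1), so E[s] = 1
            w = w - lr * s * m                       # randomly scaled SGDM update
        return w

    # Toy usage: f(w) = |w| + 0.1*sin(5w) is non-smooth and non-convex;
    # sign(w) + 0.5*cos(5w) is a (sub)gradient.
    w_final = sgdm_random_scaling(lambda w: np.sign(w) + 0.5 * np.cos(5 * w), w0=[2.0])

Because the random scalar has mean one, each step equals the ordinary SGDM step in conditional expectation; per the abstract, this small randomization is what yields optimal convergence guarantees for non-smooth non-convex losses.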

Saved in:
Bibliographic details
Main authors: Zhang, Qinzi, Cutkosky, Ashok
Format: Article
Language: eng
Subjects: Computer Science - Learning; Mathematics - Optimization and Control
Online access: https://arxiv.org/abs/2405.09742
creator Zhang, Qinzi
Cutkosky, Ashok
description Training neural networks requires optimizing a loss function that may be highly irregular, and in particular neither convex nor smooth. Popular training algorithms are based on stochastic gradient descent with momentum (SGDM), for which classical analysis applies only if the loss is either convex or smooth. We show that a very small modification to SGDM closes this gap: simply scale the update at each time point by an exponentially distributed random scalar. The resulting algorithm achieves optimal convergence guarantees. Intriguingly, this result is not derived by a specific analysis of SGDM: instead, it falls naturally out of a more general framework for converting online convex optimization algorithms to non-convex optimization algorithms.
doi_str_mv 10.48550/arxiv.2405.09742
format Article
creationdate 2024-05-15
rights http://creativecommons.org/licenses/by/4.0
identifier DOI: 10.48550/arxiv.2405.09742
language eng
recordid cdi_arxiv_primary_2405_09742
source arXiv.org
subjects Computer Science - Learning
Mathematics - Optimization and Control
title Random Scaling and Momentum for Non-smooth Non-convex Optimization
url https://arxiv.org/abs/2405.09742