Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
In this paper, we introduce a new stochastic approximation type algorithm, namely, the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm which consists of applying a postoptimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and we show that such modification allows us to improve significantly the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available.
Published in: | SIAM journal on optimization 2013-01, Vol.23 (4), p.2341-2368 |
---|---|
Main authors: | Ghadimi, Saeed; Lan, Guanghui |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Approximation; Methods; Optimization; Random variables; Simulation |
Online access: | Full text |
container_end_page | 2368 |
---|---|
container_issue | 4 |
container_start_page | 2341 |
container_title | SIAM journal on optimization |
container_volume | 23 |
creator | Ghadimi, Saeed; Lan, Guanghui |
description | In this paper, we introduce a new stochastic approximation type algorithm, namely, the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm which consists of applying a postoptimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and we show that such modification allows us to improve significantly the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available. [PUBLICATION ABSTRACT] |
doi_str_mv | 10.1137/120880811 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1052-6234 |
ispartof | SIAM journal on optimization, 2013-01, Vol.23 (4), p.2341-2368 |
issn | 1052-6234 1095-7189 |
language | eng |
recordid | cdi_proquest_journals_1464737794 |
source | SIAM Journals Online |
subjects | Algorithms Approximation Methods Optimization Random variables Simulation |
title | Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming |
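The RSG method described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a generic unbiased stochastic gradient oracle and a toy quadratic objective; it is not the authors' implementation, and it uses a uniform random choice of output iterate where the paper selects the iterate with probabilities derived from the stepsizes.

```python
import random

def rsg(grad_oracle, x0, stepsize, num_iters, seed=0):
    """Randomized stochastic gradient (RSG) sketch:
    run stochastic gradient descent for num_iters steps,
    then return one iterate chosen at random (uniformly here,
    as a simplification of the paper's stepsize-weighted rule)."""
    rng = random.Random(seed)
    x = x0
    iterates = [x0]
    for _ in range(num_iters):
        g = grad_oracle(x, rng)       # unbiased stochastic gradient at x
        x = x - stepsize * g
        iterates.append(x)
    # The randomized output is what yields guarantees for an
    # approximate stationary point in the nonconvex setting.
    return rng.choice(iterates)

# Toy example: minimize f(x) = x^2 with noisy gradients 2x + Gaussian noise.
def noisy_grad(x, rng):
    return 2.0 * x + rng.gauss(0.0, 0.1)

x_out = rsg(noisy_grad, x0=5.0, stepsize=0.1, num_iters=200)
```

The postoptimization variant mentioned in the abstract would run this routine several times independently and then pick the best candidate from the resulting short list; the zeroth-order specialization replaces `grad_oracle` with a gradient estimate built from noisy function values.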