On the Treatment of Optimization Problems With L1 Penalty Terms via Multiobjective Continuation

We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization. Sparsity is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning, as it ensures robustness against noisy data and yields models that are easier to interpret due to the small number of relevant terms. It is common practice to enforce sparsity by adding the ℓ1-norm as a penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when minimizing the main objective and the ℓ1-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the penalty method will fail to provide all optimal compromises. To avoid this issue, we present a continuation method specifically tailored to MOPs with two objective functions, one of which is the ℓ1-norm. Our method can be seen as a generalization of homotopy methods for linear regression problems to the nonlinear case. Several numerical examples, including neural network training, demonstrate our theoretical findings and the additional insight gained by this multiobjective approach.
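Stated in symbols (a minimal sketch in generic notation, with f the main objective and x the parameter vector; neither symbol is fixed by the abstract itself), the two problems contrasted above are

    \min_{x \in \mathbb{R}^n} \; f(x) + \lambda \, \|x\|_1, \qquad \lambda \geq 0 \quad \text{(penalty / weighted-sum method)}

    \min_{x \in \mathbb{R}^n} \; \bigl( f(x), \; \|x\|_1 \bigr) \quad \text{(multiobjective problem, solved directly)}

For convex f, sweeping λ from large to small traces the entire set of optimal compromises (the Pareto front). For non-convex f, the abstract's point is that no choice of λ reaches the compromises lying on non-convex parts of the front, which is what motivates solving the MOP directly by continuation.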

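As a concrete baseline, the sketch below (hypothetical code, not the authors' continuation algorithm) performs exactly this penalty sweep for an ordinary linear regression problem — the convex setting in which homotopy methods already trace the full trade-off curve. Each penalty weight yields one compromise between data fit and sparsity.

    import numpy as np

    def ista(A, b, lam, iters=500):
        # Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft-thresholding.
        L = np.linalg.norm(A, 2) ** 2  # step size 1/L; L bounds the gradient's Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - A.T @ (A @ x - b) / L                           # gradient step on the smooth part
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # prox of (lam/L)*||.||_1
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = rng.standard_normal(20) * (rng.random(20) < 0.2)  # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(50)

    # One penalty weight per line: small lam favors data fit, large lam favors sparsity.
    for lam in (0.01, 0.1, 1.0, 10.0):
        x = ista(A, b, lam)
        loss = 0.5 * np.sum((A @ x - b) ** 2)
        print(f"lam={lam:5.2f}  loss={loss:9.4f}  l1={np.abs(x).sum():8.4f}  nonzeros={np.count_nonzero(x)}")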

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-11, Vol. 44 (11), pp. 7797-7808
Main authors: Bieker, Katharina; Gebken, Bennet; Peitz, Sebastian
Format: Article
Language: English
Subjects: Algorithms; Continuation methods; Linear programming; Machine learning; Mathematical models; Medical imaging; Multiobjective optimization; Multiple objective analysis; Neural networks; nonsmooth optimization; Optimization; Pareto optimization; Robustness (mathematics); Signal processing; Sparsity; Training
Online access: Full text (https://ieeexplore.ieee.org/document/9547772)
DOI: 10.1109/TPAMI.2021.3114962
ISSN: 0162-8828; EISSN: 2160-9292, 1939-3539
PMID: 34559634