Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem

In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities, referred to as rectified Gaussian scale mixtures (R-GSM), to model the sparsity-enforcing prior distribution for the solution. With a proper choice of the mixing density, the R-GSM prior encompasses a variety of heavy-tailed densities, such as the rectified Laplacian and rectified Student's t distributions. We utilize the hierarchical representation induced by the R-GSM prior and develop an evidence maximization framework based on the expectation-maximization (EM) algorithm. Using the EM-based method, we estimate the hyperparameters and obtain a point estimate for the solution. We refer to the proposed method as rectified sparse Bayesian learning (R-SBL). We provide four R-SBL variants that offer a range of options for computational complexity and the quality of the E-step computation: Markov chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate message passing, and a diagonal approximation. Using numerical experiments, we show that the proposed R-SBL method outperforms existing S-NNLS solvers in terms of both signal and support recovery performance, and is also very robust against the structure of the design matrix.
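
The abstract's central construction can be written compactly. The notation below (solution entries x_i, mixing variances gamma_i) is shorthand inferred from the abstract's description, not reproduced from the paper:

```latex
% R-GSM prior: each nonnegative entry x_i is a zero-mean rectified
% Gaussian whose variance \gamma_i is drawn from a mixing density p(\gamma_i).
p(x_i) = \int_0^{\infty} \mathcal{N}^{R}\!\left(x_i;\, 0, \gamma_i\right) p(\gamma_i)\, d\gamma_i,
\qquad
\mathcal{N}^{R}(x;\, 0, \gamma) = 2\, \mathcal{N}(x;\, 0, \gamma)\, \mathbb{1}\{x \ge 0\}.
```

As in the classical Gaussian scale mixture family, an exponential mixing density yields a (rectified) Laplacian marginal and an inverse-Gamma mixing density a (rectified) Student's t, which is how the abstract's two examples arise.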

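The abstract names four E-step strategies but the record carries no pseudocode. Below is a minimal, illustrative Python sketch of what the diagonal-approximation variant could look like under the standard model y = Phi x + noise: the function name rsbl_diagonal, the fixed noise variance sigma2, and the truncated-Gaussian moment-matching updates are our assumptions, not the authors' published algorithm.

```python
# Sketch of an EM-style R-SBL iteration with a diagonal (coordinate-wise
# truncated-Gaussian) approximation of the E-step. Illustrative only.
import numpy as np
from scipy.stats import truncnorm

def rsbl_diagonal(Phi, y, sigma2=1e-2, n_iter=50):
    m, n = Phi.shape
    gamma = np.ones(n)          # prior variances (hyperparameters)
    mu = np.zeros(n)            # current point estimate of x
    PtP, Pty = Phi.T @ Phi, Phi.T @ y
    for _ in range(n_iter):
        # Gaussian posterior moments, ignoring the nonnegativity constraint:
        # Sigma = (Phi^T Phi / sigma2 + diag(1/gamma))^{-1}
        Sigma = np.linalg.inv(PtP / sigma2 + np.diag(1.0 / gamma))
        mean = Sigma @ Pty / sigma2
        s = np.sqrt(np.diag(Sigma))
        # Diagonal E-step: rectify each coordinate using the first and
        # second moments of N(mean_i, s_i^2) truncated to [0, inf).
        a = -mean / s           # lower truncation point in standard units
        mu = truncnorm.mean(a, np.inf, loc=mean, scale=s)
        var = truncnorm.var(a, np.inf, loc=mean, scale=s)
        # M-step: gamma_i tracks the posterior second moment E[x_i^2];
        # the small floor keeps the matrix inversion well defined.
        gamma = var + mu**2 + 1e-10
    return mu

# Usage sketch on synthetic data: recover a sparse nonnegative vector.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.5, 2.0, 0.7]
y = Phi @ x_true + 0.01 * rng.standard_normal(40)
x_hat = rsbl_diagonal(Phi, y, sigma2=1e-4)
```

Shrinking gamma_i toward zero zeroes out the corresponding coordinate, which is where the sparsity comes from; the MCMC, LMMSE, and AMP variants in the paper replace this coordinate-wise E-step with more accurate posterior computations.
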
Bibliographic Details
Published in: IEEE Transactions on Signal Processing, 2018-06, Vol. 66 (12), p. 3124-3139
Main authors: Nalci, Alican; Fedorov, Igor; Al-Shoukairi, Maher; Liu, Thomas T.; Rao, Bhaskar D.
Format: Article
Language: English
Subjects: Bayes methods; Estimation; GSM; Magnetic resonance imaging; Matching pursuit algorithms; Mathematical model; Non-negative least squares; rectified Gaussian scale mixtures; Signal processing algorithms; sparse Bayesian learning; sparse signal recovery
DOI: 10.1109/TSP.2018.2824286
PMID: 34188433
ISSN: 1053-587X
EISSN: 1941-0476
Source: IEEE Electronic Library (IEL)