Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
The L1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem, which is a regularized log-determinant program. In contrast to recent state-of-the-art methods that largely use first-order gradient information, our algorithm is based on Newton's method and employs a quadratic approximation, but with some modifications that leverage the structure of the sparse Gaussian MLE problem. We show that our method is superlinearly convergent, and present experimental results using synthetic and real-world application data that demonstrate the considerable improvements in performance of our method when compared to other state-of-the-art methods.
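For the reader's convenience, the regularized log-determinant program the abstract refers to can be written out. This is the standard formulation in the sparse inverse covariance literature, not text quoted from the record: Θ ≻ 0 is the estimated inverse covariance (precision) matrix, S the sample covariance, and λ > 0 the regularization weight.

```latex
\min_{\Theta \succ 0} \; -\log\det\Theta + \operatorname{tr}(S\Theta) + \lambda \lVert\Theta\rVert_{1},
\qquad \lVert\Theta\rVert_{1} = \sum_{i,j} \lvert\Theta_{ij}\rvert .
```

The elementwise L1 penalty drives entries of Θ to exactly zero; since zeros in the precision matrix correspond to missing edges in the Gaussian Markov Random Field, the solution encodes a sparse graph. The paper's Newton-type solver is not part of scikit-learn, but the same objective can be minimized with scikit-learn's GraphicalLasso, which uses coordinate descent rather than the quadratic-approximation scheme described in the abstract. A minimal sketch of the estimation problem, with illustrative parameter choices:

```python
import numpy as np
from sklearn.datasets import make_sparse_spd_matrix
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
d, n = 25, 100  # dimension and (deliberately limited) sample size

# Ground-truth sparse precision (inverse covariance) matrix.
precision = make_sparse_spd_matrix(d, alpha=0.9, random_state=0)
covariance = np.linalg.inv(precision)

# Draw samples from the corresponding zero-mean Gaussian.
X = rng.multivariate_normal(np.zeros(d), covariance, size=n)

# GraphicalLasso's alpha plays the role of lambda in the objective above.
model = GraphicalLasso(alpha=0.2).fit(X)

# Nonzero off-diagonal entries of the estimate are the recovered edges
# of the Gaussian Markov Random Field.
est_edges = np.abs(model.precision_) > 1e-6
true_edges = np.abs(precision) > 1e-6
print(f"edge pattern agrees on {(est_edges == true_edges).mean():.0%} of entries")
```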
Saved in:
Published in: | arXiv.org, 2013-06 |
---|---|
Main authors: | Cho-Jui Hsieh; Sustik, Matyas A; Dhillon, Inderjit S; Ravikumar, Pradeep |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Approximation; Covariance matrix; Economic models; Markov processes; Mathematical analysis; Maximum likelihood estimators; Newton methods; Optimization; Statistical methods |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Cho-Jui Hsieh; Sustik, Matyas A; Dhillon, Inderjit S; Ravikumar, Pradeep |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2013-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2085086265 |
source | Free E-Journals |
subjects | Algorithms; Approximation; Covariance matrix; Economic models; Markov processes; Mathematical analysis; Maximum likelihood estimators; Newton methods; Optimization; Statistical methods |
title | Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation |