Leave-One-Out Approach for Matrix Completion: Primal and Dual Analysis
In this paper, we introduce a powerful technique based on Leave-One-Out analysis to the study of low-rank matrix completion problems. Using this technique, we develop a general approach for obtaining fine-grained, entrywise bounds for iterative stochastic procedures in the presence of probabilistic dependency.
Saved in:
Published in: | IEEE transactions on information theory 2020-11, Vol.66 (11), p.7274-7301 |
---|---|
Main authors: | Ding, Lijun ; Chen, Yudong |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 7301 |
---|---|
container_issue | 11 |
container_start_page | 7274 |
container_title | IEEE transactions on information theory |
container_volume | 66 |
creator | Ding, Lijun Chen, Yudong |
description | In this paper, we introduce a powerful technique based on Leave-One-Out analysis to the study of low-rank matrix completion problems. Using this technique, we develop a general approach for obtaining fine-grained, entrywise bounds for iterative stochastic procedures in the presence of probabilistic dependency. We demonstrate the power of this approach in analyzing two of the most important algorithms for matrix completion: (i) the non-convex approach based on Projected Gradient Descent (PGD) for a rank-constrained formulation, also known as the Singular Value Projection algorithm, and (ii) the convex relaxation approach based on nuclear norm minimization (NNM). Using this approach, we establish the first convergence guarantee for the original form of PGD without regularization or sample splitting, and in particular show that it converges linearly in the infinity norm. For NNM, we use this approach to study a fictitious iterative procedure that arises in the dual analysis. Our results show that NNM recovers a d-by-d rank-r matrix with \mathcal{O}(\mu r \log(\mu r)\, d \log d) observed entries. This bound has optimal dependence on the matrix dimension and is independent of the condition number. To the best of our knowledge, none of the previous sample complexity results for tractable matrix completion algorithms satisfies these two properties simultaneously. |
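The rank-constrained PGD method described in the abstract, the Singular Value Projection algorithm, alternates a gradient step on the observed entries with a projection onto the set of rank-r matrices via a truncated SVD. A minimal numpy sketch under illustrative assumptions (random rank-r target, 50% uniform sampling, step size 1/p, fixed iteration count; this is not the authors' code):

```python
import numpy as np

def svp_step(X, M, mask, r, eta):
    """One Singular Value Projection iteration: gradient step on the
    observed entries, then projection onto the set of rank-r matrices."""
    G = mask * (X - M)                  # gradient of 0.5 * ||P_Omega(X - M)||_F^2
    Y = X - eta * G                     # gradient descent step
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]   # best rank-r approximation of Y

rng = np.random.default_rng(0)
d, r, p = 50, 2, 0.5
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))  # rank-r target
mask = (rng.random((d, d)) < p).astype(float)                  # observed entries
X = np.zeros((d, d))
for _ in range(200):
    X = svp_step(X, M, mask, r, eta=1.0 / p)  # step size ~ 1/(sampling rate)
print(np.max(np.abs(X - M)))  # entrywise (infinity-norm) error
```

The printed infinity-norm error, rather than the Frobenius error, is the quantity the paper's entrywise analysis controls; in this well-conditioned toy setting the iterates typically converge to the target on all entries, observed or not.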
doi_str_mv | 10.1109/TIT.2020.2992769 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0018-9448 |
ispartof | IEEE transactions on information theory, 2020-11, Vol.66 (11), p.7274-7301 |
issn | 0018-9448 1557-9654 |
language | eng |
recordid | cdi_ieee_primary_9087910 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms ; Approximation algorithms ; Complexity theory ; Convergence ; Dependence ; Iterative methods ; leave-one-out ; Matrix completion ; Minimization ; Optimization ; Probabilistic logic ; Regularization ; Relaxation methods ; statistical learning |
title | Leave-One-Out Approach for Matrix Completion: Primal and Dual Analysis |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T15%3A14%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Leave-One-Out%20Approach%20for%20Matrix%20Completion:%20Primal%20and%20Dual%20Analysis&rft.jtitle=IEEE%20transactions%20on%20information%20theory&rft.au=Ding,%20Lijun&rft.date=2020-11-01&rft.volume=66&rft.issue=11&rft.spage=7274&rft.epage=7301&rft.pages=7274-7301&rft.issn=0018-9448&rft.eissn=1557-9654&rft.coden=IETTAW&rft_id=info:doi/10.1109/TIT.2020.2992769&rft_dat=%3Cproquest_RIE%3E2453817197%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2453817197&rft_id=info:pmid/&rft_ieee_id=9087910&rfr_iscdi=true |