Tensor Networks for Latent Variable Analysis: Novel Algorithms for Tensor Train Approximation

Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model that represents data as an ordered network of subtensors of order-2 or order-3 has, so far, not been widely considered in these fields, although this so-called tensor network (TN) decomposition has been long studied in quantum physics and scientific computing. In this article, we present novel algorithms and applications of TN decompositions, with a particular focus on the tensor train (TT) decomposition and its variants. The novel algorithms developed for the TT decomposition update, in an alternating way, one or several core tensors at each iteration and exhibit enhanced mathematical tractability and scalability for large-scale data tensors. For rigor, the cases of the given ranks, given approximation error, and the given error bound are all considered. The proposed algorithms provide well-balanced TT-decompositions and are tested in the classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, achieving superior performance over the widely used truncated algorithms for TT decomposition.
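The "widely used truncated algorithms" that the proposed methods are benchmarked against can be sketched in a few lines of NumPy. The following is an illustrative implementation of the standard truncated TT-SVD sweep (a sequence of reshapes and truncated SVDs), not the paper's own alternating core-update algorithms; the function names `tt_svd` and `tt_to_full` are ours.

```python
import numpy as np

def tt_svd(tensor, ranks):
    """Decompose a d-way array into tensor-train (TT) cores of order 3.

    `ranks` lists the internal TT-ranks r_1, ..., r_{d-1}; each is capped
    by the number of singular values available at that step. This is the
    classic truncated TT-SVD sweep, shown here only as the baseline the
    paper's alternating algorithms improve upon.
    """
    shape = tensor.shape
    d = tensor.ndim
    cores = []
    C = tensor.reshape(shape[0], -1)  # unfold along the first mode
    r_prev = 1
    for k in range(d - 1):
        # Fold the carried remainder together with the current mode.
        C = C.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(ranks[k], S.size)     # truncate to the requested TT-rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = S[:r, None] * Vt[:r]      # carry the remainder to the next mode
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract a list of TT cores back into the full tensor."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.reshape([G.shape[1] for G in cores])
```

With the TT-ranks chosen at their maximal values the sweep is exact; truncating them gives the quasi-optimal low-rank approximation that the paper's algorithms aim to rebalance across cores.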

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2020-11, Vol. 31 (11), pp. 4622-4636
Authors: Phan, Anh-Huy; Cichocki, Andrzej; Uschmajew, Andre; Tichavsky, Petr; Luta, George; Mandic, Danilo P.
Format: Article
Language: English
Online access: Order full text
DOI: 10.1109/TNNLS.2019.2956926
ISSN: 2162-237X
EISSN: 2162-2388
Source: IEEE Electronic Library (IEL)
Subjects:
Algorithms
Approximation
Approximation algorithms
Approximation error
Blind source separation
Data models
Decomposition
Feature extraction
image denoising
Iterative methods
Learning algorithms
Machine learning
Mathematical analysis
Matrix decomposition
Matrix methods
nested Tucker
Noise reduction
Quantum theory
Signal processing
Signal processing algorithms
tensor network (TN)
tensor train (TT) decomposition
tensorization
Tensors
truncated singular value decomposition (SVD)
Tucker-2 (TK2) decomposition