Clustering by sparse orthogonal NMF and interpretable neural network

Employing differentiable reconstruction to interpret the clustering layers of neural networks is a potent solution to the interpretability challenges encountered when applying neural networks in the multimedia sector. However, most existing approaches to interpretability in machine learning offer only post hoc interpretations of neural networks; while they have made progress, they still fall short of providing clear explanations for black-box models. Two main challenges remain: first, how to design an interpretable neural network built on sparse orthogonal non-negative matrix factorization (NMF) rather than relying on post hoc interpretation; second, how to implement sparse orthogonal NMF and perform clustering with the designed network. To address these challenges, we design a New Interpretable Neural Network (NINN) that uses sparse orthogonal NMF as the basis of its architecture. We innovate in three aspects: first, NINN is a transparent neural network model with greater interpretability in the clustering layer. Second, NINN performs gradient-based optimization of its parameters automatically and has the advantage of end-to-end learning. Third, since NINN is a differentiable reformulation of sparse orthogonal NMF, it inherits many properties of both NMF and neural networks, such as parallel solving, online clustering, dimensionality reduction, and unsupervised learning. Experimental evidence demonstrates that NINN outperforms conventional clustering algorithms such as k-means and NMF, as well as several recent models, in terms of clustering performance.
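The abstract describes NINN as a differentiable reformulation of sparse orthogonal NMF that is trained end to end by gradient-based optimization and used for clustering. The full paper is not reproduced in this record, so the following is only a minimal sketch of that general idea, not the authors' actual NINN architecture: it factorizes a non-negative data matrix by gradient descent while penalizing non-orthogonality and non-sparsity of the membership factor. The class name SparseOrthoNMF, the softplus reparameterization, the penalty weights, and the PyTorch setting are assumptions made for illustration.

# Minimal sketch (assumed, illustrative only): gradient-based sparse orthogonal NMF for clustering.
import torch
import torch.nn.functional as F


class SparseOrthoNMF(torch.nn.Module):
    """Factorizes a non-negative data matrix X (n x d) as H @ W.

    Rows of H act as soft cluster memberships. An orthogonality penalty on H
    and an L1 penalty push H toward a sparse, near-indicator matrix, so the
    factorization doubles as a clustering.
    """

    def __init__(self, n, d, k):
        super().__init__()
        # Unconstrained parameters; softplus keeps both factors non-negative
        # while leaving the whole model differentiable end to end.
        self.H_raw = torch.nn.Parameter(0.1 * torch.randn(n, k))
        self.W_raw = torch.nn.Parameter(0.1 * torch.randn(k, d))

    def factors(self):
        return F.softplus(self.H_raw), F.softplus(self.W_raw)

    def loss(self, X, ortho_weight=1.0, l1_weight=0.01):
        H, W = self.factors()
        recon_loss = F.mse_loss(H @ W, X)                # reconstruction error
        eye = torch.eye(H.shape[1], device=X.device)
        ortho_loss = ((H.t() @ H - eye) ** 2).mean()     # column orthogonality
        sparsity_loss = H.abs().mean()                   # sparsity of memberships
        return recon_loss + ortho_weight * ortho_loss + l1_weight * sparsity_loss


def cluster(X, k, epochs=500, lr=0.05):
    """Fits the factorization by gradient descent and reads off hard labels."""
    model = SparseOrthoNMF(X.shape[0], X.shape[1], k)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        model.loss(X).backward()
        opt.step()
    H, _ = model.factors()
    return H.argmax(dim=1)  # largest membership per row = cluster label


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy non-negative data: two blobs in 5 dimensions.
    X = torch.cat([1.0 + 0.2 * torch.randn(50, 5), 4.0 + 0.2 * torch.randn(50, 5)])
    print(cluster(X, k=2))

Reparameterizing the factors through softplus, rather than using classical multiplicative NMF updates, is what keeps the objective differentiable and optimizable with a stock gradient-based optimizer, mirroring the end-to-end property the abstract attributes to NINN.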

Bibliographic Details
Published in: Multimedia systems, 2023-12, Vol. 29 (6), p. 3341-3356
Main authors: Gai, Yongwei; Liu, Jinglei
Format: Article
Language: English
Subjects: Algorithms; Clustering; Computer Communication Networks; Computer Graphics; Computer Science; Cryptology; Data Storage Representation; Machine learning; Mathematical models; Multimedia; Multimedia Information Systems; Neural networks; Operating Systems; Optimization; Regular Paper; Unsupervised learning
DOI: 10.1007/s00530-023-01187-7
ISSN: 0942-4962
EISSN: 1432-1882
Publisher: Springer Berlin Heidelberg (Berlin/Heidelberg)
Source: SpringerNature Journals
Online access: Full text