Nonredundant sparse feature extraction using autoencoders with receptive fields clustering


Bibliographic details
Published in: Neural networks 2017-09, Vol.93, p.99-109
Main authors: Ayinde, Babajide O., Zurada, Jacek M.
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 109
container_issue
container_start_page 99
container_title Neural networks
container_volume 93
creator Ayinde, Babajide O.
Zurada, Jacek M.
description This paper proposes new techniques for data representation in deep learning using agglomerative clustering. Existing autoencoder-based representation techniques tend to produce encoding and decoding receptive fields in layered autoencoders that are duplicative, so similar features are extracted and filtering becomes redundant. We propose a way to address this problem and show that such redundancy can be eliminated, yielding smaller networks and unique receptive fields that extract distinct features. It is also shown that autoencoders with nonnegativity constraints on their weights extract fewer redundant features than conventional sparse autoencoders. The concept is illustrated with a conventional sparse autoencoder and nonnegativity-constrained autoencoders on MNIST digit recognition, the NORB normalized-uniform object dataset, and the Yale face dataset.
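The abstract describes merging near-duplicate receptive fields of a trained autoencoder via agglomerative clustering so that only distinct filters are kept. The Python lines below are a minimal sketch of that idea, not the authors' exact procedure: the encoder weight matrix W (one receptive field per row), the cosine metric, average linkage, and the 0.3 distance threshold are all illustrative assumptions.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    def prune_redundant_filters(W, distance_threshold=0.3):
        # W: (n_hidden, n_visible) encoder weight matrix; each row is one
        # receptive field. distance_threshold is an illustrative cosine-distance
        # cutoff, not a value taken from the paper.
        distances = pdist(W, metric="cosine")       # pairwise filter distances
        Z = linkage(distances, method="average")    # agglomerative clustering
        labels = fcluster(Z, t=distance_threshold, criterion="distance")
        # Keep one representative receptive field per cluster.
        keep = [np.where(labels == c)[0][0] for c in np.unique(labels)]
        return W[np.sort(keep)]

    # Toy usage: 64 hypothetical filters over 784 inputs (MNIST-sized vectors).
    W = np.random.randn(64, 784)
    print(prune_redundant_filters(W).shape)

Dropping all but one filter per cluster is what yields the smaller network claimed in the abstract; the nonnegativity-constrained autoencoders mentioned there discourage negative weights during training, which the authors report further reduces redundancy.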
doi_str_mv 10.1016/j.neunet.2017.04.012
format Article
publisher Elsevier Ltd
pmid 28552509
rights Copyright © 2017 Elsevier Ltd. All rights reserved.
orcidid 0000-0001-7341-8799
0000-0001-6622-534X
fulltext fulltext
identifier ISSN: 0893-6080
ispartof Neural networks, 2017-09, Vol.93, p.99-109
issn 0893-6080
1879-2782
language eng
recordid cdi_proquest_miscellaneous_1903437380
source MEDLINE; Elsevier ScienceDirect Journals Complete
subjects Agglomerative clustering
Autoencoder
Cluster Analysis
Databases, Factual
Deep learning
Filter clustering
Information Storage and Retrieval - methods
Learning
Pattern Recognition, Automated - methods
Receptive fields
title Nonredundant sparse feature extraction using autoencoders with receptive fields clustering
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T06%3A22%3A01IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Nonredundant%20sparse%20feature%20extraction%20using%20autoencoders%20with%20receptive%20fields%20clustering&rft.jtitle=Neural%20networks&rft.au=Ayinde,%20Babajide%20O.&rft.date=2017-09&rft.volume=93&rft.spage=99&rft.epage=109&rft.pages=99-109&rft.issn=0893-6080&rft.eissn=1879-2782&rft_id=info:doi/10.1016/j.neunet.2017.04.012&rft_dat=%3Cproquest_cross%3E1903437380%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1903437380&rft_id=info:pmid/28552509&rft_els_id=S0893608017300928&rfr_iscdi=true