A Multiobjective Sparse Feature Learning Model for Deep Neural Networks
Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the building blocks of deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them require a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective and that the proposed multiobjective model can learn useful sparse features.
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2015-12, Vol. 26 (12), p. 3263-3277 |
---|---|
Main authors: | Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi |
Format: | Article |
Language: | English |
Online access: | Order full text |
container_end_page | 3277 |
---|---|
container_issue | 12 |
container_start_page | 3263 |
container_title | IEEE Transactions on Neural Networks and Learning Systems |
container_volume | 26 |
creator | Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi |
description | Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the building blocks of deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them require a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective and that the proposed multiobjective model can learn useful sparse features. |
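The trade-off described in the abstract can be sketched as two scalar objectives evaluated for each candidate autoencoder, plus the Pareto-dominance test a multiobjective evolutionary algorithm would use to compare candidates. This is an illustrative sketch, not the authors' implementation: the network sizes, function names, and the use of mean hidden activation as the sparsity measure are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objectives(params, X):
    """Return the two competing objectives for one candidate autoencoder:
    (mean squared reconstruction error, mean hidden activation)."""
    W1, b1, W2, b2 = params
    H = sigmoid(X @ W1 + b1)          # hidden-layer code
    X_hat = sigmoid(H @ W2 + b2)      # reconstruction of the input
    recon = np.mean((X - X_hat) ** 2)  # objective 1: reconstruction fidelity
    sparsity = np.mean(H)              # objective 2: average activation (lower = sparser)
    return recon, sparsity

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse
    in both objectives and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Toy data and two random candidate networks (5 inputs, 3 hidden units).
X = rng.random((20, 5))
def cand():
    return (rng.normal(size=(5, 3)), np.zeros(3),
            rng.normal(size=(3, 5)), np.zeros(5))

f_a = objectives(cand(), X)
f_b = objectives(cand(), X)
print(f_a, f_b, dominates(f_a, f_b))
```

An evolutionary algorithm such as the one the paper builds on would keep the set of mutually non-dominated candidates, so no fixed sparsity constant has to be chosen in advance; the compromise between the two objectives emerges from the Pareto front.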
doi_str_mv | 10.1109/TNNLS.2015.2469673 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2162-237X; EISSN: 2162-2388; PMID: 26340790; CODEN: ITNNAL |
ispartof | IEEE Transactions on Neural Networks and Learning Systems, 2015-12, Vol.26 (12), p.3263-3277 |
issn | 2162-237X 2162-2388 |
language | eng |
recordid | cdi_proquest_miscellaneous_1736416805 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Artificial Intelligence; Brain modeling; Computer Simulation; Deep neural networks; evolutionary algorithm; Evolutionary computation; Feature extraction; Humans; Learning - physiology; Linear programming; Models, Theoretical; multiobjective optimization; Neural networks; Neural Networks (Computer); Pareto optimization; sparse feature learning; Sparsity |
title | A Multiobjective Sparse Feature Learning Model for Deep Neural Networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T05%3A12%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Multiobjective%20Sparse%20Feature%20Learning%20Model%20for%20Deep%20Neural%20Networks&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Gong,%20Maoguo&rft.date=2015-12-01&rft.volume=26&rft.issue=12&rft.spage=3263&rft.epage=3277&rft.pages=3263-3277&rft.issn=2162-237X&rft.eissn=2162-2388&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2015.2469673&rft_dat=%3Cproquest_RIE%3E3883600321%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1738834183&rft_id=info:pmid/26340790&rft_ieee_id=7230284&rfr_iscdi=true |