Reducing the Dimensionality of Data with Neural Networks
High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
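The abstract's core idea can be illustrated with a minimal sketch: an autoencoder with a small central code layer, trained by gradient descent on reconstruction error, compared against a PCA projection of equal code size. This is NOT the paper's method (Hinton & Salakhutdinov pretrain deep nonlinear autoencoders with stacked restricted Boltzmann machines); it is a shallow *linear* toy version, and all names and dimensions below are illustrative choices, not from the paper.

```python
import numpy as np

# Shallow linear autoencoder vs. PCA, both using a k-dimensional code.
# Illustrative only: the paper's networks are deep and nonlinear, with
# layer-wise RBM pretraining before gradient-descent fine-tuning.
rng = np.random.default_rng(0)

n, d, k = 500, 20, 3                      # samples, input dim, code dim
latent = rng.normal(size=(n, k))          # synthetic low-dimensional structure
mixing = rng.normal(size=(k, d))
X = latent @ mixing + 0.01 * rng.normal(size=(n, d))
X = X - X.mean(axis=0)                    # center the data, as PCA assumes

W_enc = 0.1 * rng.normal(size=(d, k))     # encoder: input -> code
W_dec = 0.1 * rng.normal(size=(k, d))     # decoder: code -> reconstruction

lr = 0.01
for _ in range(2000):
    code = X @ W_enc                      # low-dimensional codes
    err = code @ W_dec - X                # reconstruction error
    g_dec = (code.T @ err) / n            # gradients of mean squared error
    g_enc = (X.T @ (err @ W_dec.T)) / n
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

ae_mse = np.mean(((X @ W_enc) @ W_dec - X) ** 2)

# PCA baseline: reconstruct from the top-k principal directions.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_mse = np.mean(((X @ Vt[:k].T) @ Vt[:k] - X) ** 2)
print(f"autoencoder MSE={ae_mse:.4f}  PCA MSE={pca_mse:.4f}")
```

Note the design point this toy makes: a purely linear autoencoder can at best match the rank-k PCA reconstruction, which is precisely why the paper's contribution, well-initialized deep nonlinear autoencoders, is needed to beat PCA.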
Published in: | Science (American Association for the Advancement of Science) 2006-07, Vol.313 (5786), p.504-507 |
---|---|
Main authors: | Hinton, G.E ; Salakhutdinov, R.R |
Format: | Article |
Language: | eng |
Online access: | Full text |
creator | Hinton, G.E Salakhutdinov, R.R |
description | High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data. |
doi_str_mv | 10.1126/science.1127647 |
identifier | ISSN: 0036-8075 ; EISSN: 1095-9203 ; PMID: 16873662 |
issn | 0036-8075 1095-9203 |
recordid | cdi_proquest_miscellaneous_68688250 |
source | JSTOR Archive Collection A-Z Listing; American Association for the Advancement of Science |
subjects | Applied sciences ; Artificial intelligence ; Artificial neural networks ; Computer science ; Computer science, control theory, systems ; Connectionism. Neural networks ; Datasets ; Decryption ; Dimensionality ; Exact sciences and technology ; Image reconstruction ; Logistics ; Neural networks ; Pixels ; Principal components analysis ; Statistical variance ; Training |