Holographic Neural Architectures

Representation learning is at the heart of what makes deep learning effective. In this work, we introduce a new framework for representation learning that we call "Holographic Neural Architectures" (HNAs). In the same way that an observer can experience the 3D structure of a holographed object by looking at its hologram from several angles, HNAs derive Holographic Representations from the training set. These representations can then be explored by moving along a continuous bounded single dimension. We show that HNAs can be used to make generative networks and state-of-the-art regression models, and that they are inherently highly resistant to noise. Finally, we argue that because of their denoising abilities and their capacity to generalize well from very few examples, models based upon HNAs are particularly well suited for biological applications where training examples are rare or noisy.
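The record contains no code, and the abstract only sketches the idea. Purely as an illustration of the one concrete mechanism it describes (a learned representation that is explored by moving along a single continuous, bounded dimension), the minimal NumPy sketch below conditions a toy decoder on a scalar t in [0, 1] and sweeps it. Every name, shape, and parameter here is a hypothetical stand-in; none of it comes from the paper.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Assumption: an HNA-style model can be read as a decoder that maps a single
# bounded scalar t in [0, 1] (the "viewing angle") to an output, so sweeping t
# explores the learned representation. Weights are random here; in practice
# they would be learned from the training set.
import numpy as np

rng = np.random.default_rng(0)

hidden_dim, out_dim = 64, 16
W_t = rng.normal(size=(1, hidden_dim)) * 0.1    # projects the scalar t
W_h = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
W_out = rng.normal(size=(hidden_dim, out_dim)) * 0.1
b_h = np.zeros(hidden_dim)
b_out = np.zeros(out_dim)

def decode(t: float) -> np.ndarray:
    """Map a bounded scalar t in [0, 1] to an output vector."""
    t = np.clip(t, 0.0, 1.0)
    h = np.tanh(np.array([[t]]) @ W_t + b_h)    # embed the scalar
    h = np.tanh(h @ W_h)                        # hidden transformation
    return (h @ W_out + b_out).ravel()          # decoded output

# "Exploring" the representation: sweep the single bounded dimension.
for t in np.linspace(0.0, 1.0, 5):
    print(f"t={t:.2f} -> first 3 outputs: {decode(t)[:3]}")
```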


Bibliographic Details
Main Authors: Daouda, Tariq; Zumer, Jeremie; Perreault, Claude; Lemieux, Sébastien
Format: Article
Language: English
creator Daouda, Tariq; Zumer, Jeremie; Perreault, Claude; Lemieux, Sébastien
doi_str_mv 10.48550/arxiv.1806.00931
format Article
identifier DOI: 10.48550/arxiv.1806.00931
language eng
recordid cdi_arxiv_primary_1806_00931
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
Quantitative Biology - Genomics
Quantitative Biology - Tissues and Organs
Statistics - Machine Learning
title Holographic Neural Architectures