Recursive Sketches for Modular Deep Learning
creator | Ghazi, Badih; Panigrahy, Rina; Wang, Joshua R |
description | We present a mechanism to compute a sketch (succinct summary) of how a complex modular deep network processes its inputs. The sketch summarizes essential information about the inputs and outputs of the network and can be used to quickly identify key components and summary statistics of the inputs. Furthermore, the sketch is recursive and can be unrolled to identify sub-components of these components and so forth, capturing a potentially complicated DAG structure. These sketches erase gracefully; even if we erase a fraction of the sketch at random, the remainder still retains the "high-weight" information present in the original sketch. The sketches can also be organized in a repository to implicitly form a "knowledge graph"; it is possible to quickly retrieve sketches in the repository that are related to a sketch of interest; arranged in this fashion, the sketches can also be used to learn emerging concepts by looking for new clusters in sketch space. Finally, in the scenario where we want to learn a ground truth deep network, we show that augmenting input/output pairs with these sketches can theoretically make it easier to do so. (An illustrative code sketch of this mechanism follows the record fields below.) |
doi_str_mv | 10.48550/arxiv.1905.12730 |
format | Article |
creationdate | 2019-05-29 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
linktorsrc | https://arxiv.org/abs/1905.12730 |
backlink | https://doi.org/10.48550/arXiv.1905.12730 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1905.12730 |
language | eng |
recordid | cdi_arxiv_primary_1905_12730 |
source | arXiv.org |
subjects | Computer Science - Computation and Language ; Computer Science - Data Structures and Algorithms ; Computer Science - Learning ; Statistics - Machine Learning |
title | Recursive Sketches for Modular Deep Learning |
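The description field above is the paper's abstract; the actual construction is given in the full text at the linktorsrc/backlink URLs, not in this record. As a rough illustration only, here is a minimal Python sketch of the general flavor of such a mechanism: each module's output is compressed by a fixed Gaussian random projection, a parent module's sketch folds in its children's sketches, a fraction of the sketch is then erased at random, and the presence of a sub-component is tested by correlation. The sketch width, the 128- and 256-dimensional module outputs, the two-level toy network, and the presence test are all assumptions made for this example, not the paper's construction.

```python
# Illustrative only: a toy recursive sketch built from Gaussian random
# projections. Dimensions, network shape, and the presence test are
# assumptions for this example, not the construction from the paper.
import numpy as np

rng = np.random.default_rng(0)
SKETCH_DIM = 64  # assumed sketch width

def random_projection(in_dim):
    """One fixed Gaussian projection matrix per module / child slot."""
    return rng.normal(0.0, 1.0 / np.sqrt(SKETCH_DIM), size=(SKETCH_DIM, in_dim))

def sketch_node(output, child_sketches, proj_out, proj_children):
    """Compress this module's output and fold in its children's sketches."""
    s = proj_out @ output
    for proj, child in zip(proj_children, child_sketches):
        s = s + proj @ child
    return s

# Toy two-level modular network: two leaf modules feed one parent module.
leaf_out_1 = rng.normal(size=128)
leaf_out_2 = rng.normal(size=128)
P1, P2 = random_projection(128), random_projection(128)
leaf_sketch_1 = P1 @ leaf_out_1
leaf_sketch_2 = P2 @ leaf_out_2

parent_out = rng.normal(size=256)
P_parent = random_projection(256)
P_child_1, P_child_2 = random_projection(SKETCH_DIM), random_projection(SKETCH_DIM)
top_sketch = sketch_node(parent_out, [leaf_sketch_1, leaf_sketch_2],
                         P_parent, [P_child_1, P_child_2])

# Graceful erasure: zero out roughly 25% of the sketch coordinates at random.
keep_mask = rng.random(SKETCH_DIM) > 0.25
partial_sketch = top_sketch * keep_mask

# "Unrolling" by correlation: independent Gaussian projections are nearly
# orthogonal in high dimension, so the inner product with a candidate child's
# projected sketch is large (in expectation) only if that child contributed.
present = float(partial_sketch @ (P_child_1 @ leaf_sketch_1))
absent = float(partial_sketch @ (P_child_1 @ rng.normal(size=SKETCH_DIM)))
print(f"score for the true child sketch: {present:.1f}")
print(f"score for an unrelated vector:   {absent:.1f}")
```

Stored in a repository, such fixed-size vectors could be compared by cosine similarity or nearest-neighbor search to retrieve related sketches, which is the flavor of the "knowledge graph" use described in the abstract; the paper's own repository construction may differ.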