Random sketch learning for deep neural networks in edge computing


Bibliographic Details
Published in: Nature Computational Science, 2021-03, Vol. 1 (3), p. 221-228
Main authors: Li, Bin; Chen, Peijun; Liu, Hongfu; Guo, Weisi; Cao, Xianbin; Du, Junzhao; Zhao, Chenglin; Zhang, Jun
Format: Article
Language: English
Online access: Full text
Abstract: Despite the great potential of deep neural networks (DNNs), they require massive weights and huge computational resources, creating a vast gap when deploying artificial intelligence at low-cost edge devices. Current lightweight DNNs, achieved by high-dimensional space pre-training and post-compression, present challenges when covering the resources deficit, making tiny artificial intelligence hard to implement. Here we report an architecture named random sketch learning, or Rosler, for computationally efficient tiny artificial intelligence. We build a universal compressing-while-training framework that directly learns a compact model and, most importantly, enables computationally efficient on-device learning. As validated on different models and datasets, it attains a substantial memory reduction of ~50-90× (16-bit quantization) compared with fully connected DNNs. We demonstrate it on low-cost hardware, whereby the computation is accelerated by >180× and the energy consumption is reduced by ~10×. Our method paves the way for deploying tiny artificial intelligence in many scientific and industrial applications.
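The compression idea summarized in the abstract can be illustrated with a generic random sketch of a single fully connected layer: storing two small random-projection factors instead of the full weight matrix. This is a minimal NumPy sketch under assumed settings (a Gaussian projection of rank 32, layer sizes chosen for illustration), not the authors' Rosler algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer weights: 512 inputs -> 256 outputs (131,072 parameters).
W = rng.standard_normal((256, 512))

# Random sketch: project the 512-dim input space down to rank r = 32
# with a Gaussian matrix S scaled so that E[S.T @ S] = I.
r = 32
S = rng.standard_normal((r, 512)) / np.sqrt(r)
W_sketch = W @ S.T  # 256 x 32 -- the compact factor that is stored

# Forward pass uses the two small factors instead of the full W.
x = rng.standard_normal(512)
y_full = W @ x
y_sketch = W_sketch @ (S @ x)  # rough approximation of W @ x

# Storage comparison: full matrix vs. the two sketch factors.
stored_full = W.size                      # 131072
stored_sketch = W_sketch.size + S.size    # 24576
print(stored_full / stored_sketch)        # ~5.33x fewer stored parameters
```

The approximation quality depends on the rank r and on how well the layer's weights concentrate in a low-dimensional subspace; the paper's reported 50-90× savings additionally rely on training the compact factors directly rather than sketching a pre-trained matrix as done here.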
DOI: 10.1038/s43588-021-00039-6
PMID: 38183196
ISSN: 2662-8457
Source: SpringerLink Journals