Runtime Network Routing for Efficient Image Classification

In this paper, we propose a generic Runtime Network Routing (RNR) framework for efficient image classification, which selects an optimal path inside the network. Unlike existing static neural network acceleration methods, our method preserves the full ability of the original large network and conducts dynamic routing at runtime according to the input image and current feature maps. The routing is performed in a bottom-up, layer-by-layer manner, where we model it as a Markov decision process and use reinforcement learning for training. The agent determines the estimated reward of each sub-path and conducts routing conditioned on different samples, where a faster path is taken when the image is easier for the task. Since the ability of the network is fully preserved, the balance point is easily adjustable according to the available resources. We test our method on both multi-path residual networks and incremental convolutional channel pruning, and show that RNR consistently outperforms static methods at the same computation complexity on both the CIFAR and ImageNet datasets. Our method can also be applied to off-the-shelf neural network structures and easily extended to other application scenarios.
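As a rough illustration of the routing idea summarized above, the sketch below wires a layer with several candidate sub-paths and a lightweight policy head that scores them from the current feature map, so that cheaper paths win on easy inputs once a cost penalty is applied. Everything here (class names, the greedy selection, the linear policy head, batch size 1) is a hypothetical simplification, not the authors' implementation, and the reinforcement-learning training of the policy as a Markov decision process is omitted.

```python
# Hypothetical sketch of per-sample runtime routing inspired by the abstract.
# All names are illustrative; this is not the authors' released code.
import torch
import torch.nn as nn

class RoutedBlock(nn.Module):
    """A block with several sub-paths of increasing cost. A small policy head
    estimates a reward per sub-path from the incoming feature map, and the
    path with the best reward-minus-cost trade-off is executed."""

    def __init__(self, channels, paths):
        super().__init__()
        self.paths = nn.ModuleList(paths)   # candidate sub-paths (e.g. pruned variants)
        self.policy = nn.Sequential(        # lightweight router head
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, len(paths)),
        )

    def forward(self, x, cost_penalty=0.0):
        # Estimated reward of each sub-path for this input (assumes batch size 1).
        scores = self.policy(x)
        # Penalize expensive paths; raising cost_penalty shifts the
        # accuracy/speed balance point toward faster paths.
        costs = torch.arange(len(self.paths), dtype=scores.dtype, device=x.device)
        idx = int((scores - cost_penalty * costs).argmax(dim=1)[0])
        return self.paths[idx](x)

# Example: three sub-paths of growing cost, routed per input at inference time.
block = RoutedBlock(
    channels=16,
    paths=[nn.Identity(),
           nn.Conv2d(16, 16, 3, padding=1),
           nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 16, 3, padding=1))],
)
out = block(torch.randn(1, 16, 32, 32), cost_penalty=0.1)
```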

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019-10, Vol. 41 (10), pp. 2291-2304
Authors: Rao, Yongming; Lu, Jiwen; Lin, Ji; Zhou, Jie
Format: Article
Language: English
DOI: 10.1109/TPAMI.2018.2878258
ISSN: 0162-8828
EISSN: 1939-3539, 2160-9292
Source: IEEE Electronic Library (IEL)
Subjects:
Acceleration
Computational modeling
Conditioning
deep learning
Deep network compression
efficient inference model
Feature maps
Image classification
Markov analysis
Markov chains
Neural networks
Pruning
reinforcement learning
Routing
Run time (computers)
Runtime
Test procedures
Training