Fast Neural Network Adaptation via Parameter Remapping and Architecture Search
Deep neural networks achieve remarkable performance in many computer vision tasks. Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone, commonly pre-trained on ImageNet. However, performance gains can be achieved by designing network architectures specifically for detection and segmentation, as recent neural architecture search (NAS) research for these tasks has shown. One major challenge, though, is that ImageNet pre-training of the search space representation (a.k.a. super network) or of the searched networks incurs huge computational cost. In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and the parameters of a seed network (e.g. a high-performing manually designed backbone) to become a network with different depths, widths, or kernels via a Parameter Remapping technique, making it possible to utilize NAS for detection/segmentation tasks far more efficiently. In our experiments, we conduct FNA on MobileNetV2 to obtain new networks for both segmentation and detection that clearly outperform existing networks designed both manually and by NAS. The total computation cost of FNA is significantly lower than that of SOTA segmentation/detection NAS approaches: 1737$\times$ less than DPC, 6.8$\times$ less than Auto-DeepLab, and 7.4$\times$ less than DetNAS. The code is available at https://github.com/JaminFong/FNA.
Saved in:
Main authors: | Fang, Jiemin; Sun, Yuzhu; Peng, Kangjian; Zhang, Qian; Li, Yuan; Liu, Wenyu; Wang, Xinggang |
---|---|
Format: | Article |
Language: | eng |
Published: | 2020-01-08 |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Fang, Jiemin Sun, Yuzhu Peng, Kangjian Zhang, Qian Li, Yuan Liu, Wenyu Wang, Xinggang |
description | Deep neural networks achieve remarkable performance in many computer vision
tasks. Most state-of-the-art (SOTA) semantic segmentation and object detection
approaches reuse neural network architectures designed for image classification
as the backbone, commonly pre-trained on ImageNet. However, performance gains
can be achieved by designing network architectures specifically for detection
and segmentation, as recent neural architecture search (NAS) research for these
tasks has shown. One major challenge, though, is that ImageNet pre-training of
the search space representation (a.k.a. super network) or of the searched
networks incurs huge computational cost. In this paper, we propose a Fast
Neural Network Adaptation (FNA) method, which can adapt both the architecture
and the parameters of a seed network (e.g. a high-performing manually designed
backbone) to become a network with different depths, widths, or kernels via a
Parameter Remapping technique, making it possible to utilize NAS for
detection/segmentation tasks far more efficiently. In our experiments, we
conduct FNA on MobileNetV2 to obtain new networks for both segmentation and
detection that clearly outperform existing networks designed both manually and
by NAS. The total computation cost of FNA is significantly lower than that of
SOTA segmentation/detection NAS approaches: 1737$\times$ less than DPC,
6.8$\times$ less than Auto-DeepLab, and 7.4$\times$ less than DetNAS. The code
is available at https://github.com/JaminFong/FNA. |
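The Parameter Remapping idea summarized in the description can be illustrated with a small sketch. The paper's actual mapping scheme is not given in this record, so the code below only shows one plausible interpretation as an assumption: existing channels are copied cyclically to fill a wider layer, and a larger kernel is built by zero-padding the seed kernel at its center (a smaller one by center-cropping). The helper names `remap_width` and `remap_kernel` are hypothetical, not from the authors' implementation.

```python
import numpy as np

def remap_width(w, new_out, new_in):
    """Remap a conv weight (out, in, kh, kw) to a wider/narrower layer by
    copying existing channels cyclically (an assumed scheme)."""
    out_idx = np.arange(new_out) % w.shape[0]
    in_idx = np.arange(new_in) % w.shape[1]
    return w[out_idx][:, in_idx]

def remap_kernel(w, new_k):
    """Remap to a new kernel size: center the seed kernel inside a larger
    zero-initialized one, or center-crop to a smaller one."""
    k = w.shape[-1]
    if new_k >= k:
        pad = (new_k - k) // 2
        out = np.zeros(w.shape[:2] + (new_k, new_k), dtype=w.dtype)
        out[..., pad:pad + k, pad:pad + k] = w
        return out
    crop = (k - new_k) // 2
    return w[..., crop:crop + new_k, crop:crop + new_k]

# Adapt a seed 3x3 layer with 16 output channels into a 5x5 layer with 24.
seed = np.random.randn(16, 8, 3, 3)
adapted = remap_kernel(remap_width(seed, 24, 8), 5)
print(adapted.shape)  # → (24, 8, 5, 5)
```

A remapped layer of this kind inherits the seed network's learned filters as an initialization, which is how such a scheme can sidestep repeated ImageNet pre-training of every candidate in the search space; depth remapping (copying whole layers) would follow the same spirit and is omitted here for brevity.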
doi_str_mv | 10.48550/arxiv.2001.02525 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2001.02525 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2001_02525 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | Fast Neural Network Adaptation via Parameter Remapping and Architecture Search |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T09%3A00%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Fast%20Neural%20Network%20Adaptation%20via%20Parameter%20Remapping%20and%20Architecture%20Search&rft.au=Fang,%20Jiemin&rft.date=2020-01-08&rft_id=info:doi/10.48550/arxiv.2001.02525&rft_dat=%3Carxiv_GOX%3E2001_02525%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |