Learning to Rank Ace Neural Architectures via Normalized Discounted Cumulative Gain

One of the key challenges in Neural Architecture Search (NAS) is to efficiently rank the performances of architectures. The mainstream assessment of performance rankers uses ranking correlations (e.g., Kendall's tau), which pay equal attention to the whole space. However, the optimization goal...

Bibliographic details
Main authors: Zhang, Yuge, Zhang, Quanlu, Zhang, Li Lyna, Yang, Yaming, Yan, Chenqian, Gao, Xiaotian, Yang, Yuqing
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Zhang, Yuge
Zhang, Quanlu
Zhang, Li Lyna
Yang, Yaming
Yan, Chenqian
Gao, Xiaotian
Yang, Yuqing
description One of the key challenges in Neural Architecture Search (NAS) is to efficiently rank the performances of architectures. The mainstream assessment of performance rankers uses ranking correlations (e.g., Kendall's tau), which pay equal attention to the whole space. However, the optimization goal of NAS is identifying top architectures while paying less attention to other architectures in the search space. In this paper, we show both empirically and theoretically that Normalized Discounted Cumulative Gain (NDCG) is a better metric for rankers. Subsequently, we propose a new algorithm, AceNAS, which directly optimizes NDCG with LambdaRank. It also leverages weak labels produced by weight-sharing NAS to pre-train the ranker, so as to further reduce search cost. Extensive experiments on 12 NAS benchmarks and a large-scale search space demonstrate that our approach consistently outperforms SOTA NAS methods, with up to 3.67% accuracy improvement and 8x reduction in search cost.
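
For reference, the sketch below shows how the NDCG@k metric named in the abstract can be computed for a ranked list of architectures. It is a minimal illustration only: the relevance labels (e.g., binned ground-truth accuracies), the function names dcg_at_k and ndcg_at_k, and the toy scores are assumptions made here, not the construction or code used in the AceNAS paper.

# Minimal sketch of NDCG@k for ranking candidate architectures.
# Assumes each architecture has an integer relevance label derived from its
# ground-truth accuracy; this mapping is illustrative, not the paper's.
import numpy as np

def dcg_at_k(relevance, k):
    """Discounted cumulative gain over the top-k positions of a ranked list."""
    rel = np.asarray(relevance, dtype=float)[:k]
    positions = np.arange(1, rel.size + 1)
    return np.sum((2.0 ** rel - 1.0) / np.log2(positions + 1))

def ndcg_at_k(predicted_scores, true_relevance, k=10):
    """NDCG@k: DCG of the predicted ordering divided by the ideal (best possible) DCG."""
    order = np.argsort(predicted_scores)[::-1]           # rank architectures by predicted score, descending
    dcg = dcg_at_k(np.asarray(true_relevance)[order], k)
    ideal = dcg_at_k(np.sort(true_relevance)[::-1], k)   # DCG of the perfect ordering
    return dcg / ideal if ideal > 0 else 0.0

# Toy example: five architectures with hypothetical relevance labels and ranker scores.
true_relevance = [3, 2, 3, 0, 1]
predicted_scores = [0.9, 0.2, 0.8, 0.1, 0.4]
print(ndcg_at_k(predicted_scores, true_relevance, k=5))  # close to 1.0 when top architectures are ranked first

Because the log-discount concentrates weight on the top positions, NDCG rewards a ranker for getting the best architectures right, which is the property the abstract contrasts with whole-space correlations such as Kendall's tau.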
doi_str_mv 10.48550/arxiv.2108.03001
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2108.03001
language eng
recordid cdi_arxiv_primary_2108_03001
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
title Learning to Rank Ace Neural Architectures via Normalized Discounted Cumulative Gain
url https://arxiv.org/abs/2108.03001