Learning to Rank Ace Neural Architectures via Normalized Discounted Cumulative Gain
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: One of the key challenges in Neural Architecture Search (NAS) is to efficiently rank the performances of architectures. The mainstream assessment of performance rankers uses ranking correlations (e.g., Kendall's tau), which pay equal attention to the whole search space. However, the optimization goal of NAS is to identify top architectures while paying less attention to the other architectures in the search space. In this paper, we show both empirically and theoretically that Normalized Discounted Cumulative Gain (NDCG) is a better metric for rankers. Subsequently, we propose a new algorithm, AceNAS, which directly optimizes NDCG with LambdaRank. It also leverages weak labels produced by weight-sharing NAS to pre-train the ranker, so as to further reduce search cost. Extensive experiments on 12 NAS benchmarks and a large-scale search space demonstrate that our approach consistently outperforms SOTA NAS methods, with up to 3.67% accuracy improvement and 8x reduction in search cost.
DOI: 10.48550/arxiv.2108.03001
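
To make the metric named in the summary concrete, below is a minimal sketch (not the paper's code) of how NDCG would score an architecture ranker. The helper names (`dcg`, `ndcg`) and the toy accuracies and ranker scores are assumptions for illustration; the key point is the logarithmic discount, which concentrates credit on the top of the ranking, unlike Kendall's tau, which weights all architecture pairs equally.

```python
# Minimal sketch, assuming accuracy is used directly as the relevance label;
# this is not the paper's implementation. All data below is made up.
import math

def dcg(relevances):
    """Discounted cumulative gain of relevance values listed in ranked order.
    The log2 discount down-weights errors far from the top of the ranking."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(true_relevance, predicted_scores, k=None):
    """NDCG@k: DCG of the predicted ordering divided by the ideal (sorted) DCG."""
    order = sorted(range(len(predicted_scores)),
                   key=lambda i: predicted_scores[i], reverse=True)
    ranked = [true_relevance[i] for i in order][:k]
    ideal = sorted(true_relevance, reverse=True)[:k]
    ideal_dcg = dcg(ideal)
    return dcg(ranked) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy example: ground-truth accuracies of five architectures and the scores a
# hypothetical ranker assigns to them.
accuracies = [0.72, 0.91, 0.85, 0.60, 0.88]
ranker_scores = [0.3, 0.9, 0.8, 0.1, 0.4]
print(f"NDCG@3 = {ndcg(accuracies, ranker_scores, k=3):.3f}")
```

LambdaRank, as referenced in the summary, optimizes this kind of metric by scaling pairwise ranking gradients by the change in NDCG that swapping two items would cause; that detail is omitted from the sketch above.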