PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The wide application of pre-trained models is driving the trend of once-for-all training in one-shot neural architecture search (NAS). However, training within a huge sample space damages the performance of individual subnets and requires much computation to search for an optimal model. In this paper, we present PreNAS, a search-free NAS approach that accentuates target models in one-shot training. Specifically, the sample space is dramatically reduced in advance by a zero-cost selector, and weight-sharing one-shot training is performed on the preferred architectures to alleviate update conflicts. Extensive experiments have demonstrated that PreNAS consistently outperforms state-of-the-art one-shot NAS competitors for both Vision Transformer and convolutional architectures, and importantly, enables instant specialization with zero search cost. Our code is available at https://github.com/tinyvision/PreNAS.
DOI: 10.48550/arxiv.2304.14636
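
The abstract describes a two-stage recipe: a zero-cost selector first shrinks the one-shot sample space to a small set of preferred architectures, and weight-sharing one-shot training then samples only from that set, so the shared weights receive fewer conflicting updates. The sketch below illustrates that flow under stated assumptions; `Arch`, `zero_cost_score`, and the `supernet.activate` / `train_step` interface are hypothetical names for illustration, not the actual PreNAS API.

```python
import random
from typing import Callable, Dict, List

# Illustrative sketch of the two-stage flow described in the abstract:
# (1) score candidate architectures with a training-free (zero-cost) proxy
#     and keep only the top-ranked "preferred" ones,
# (2) run weight-sharing one-shot training restricted to that preferred set.
# The names below are assumptions, not the PreNAS implementation.

Arch = Dict[str, int]  # e.g. {"depth": 12, "embed_dim": 384, "mlp_ratio": 4}


def select_preferred(candidates: List[Arch],
                     zero_cost_score: Callable[[Arch], float],
                     keep: int) -> List[Arch]:
    """Rank candidates by a zero-cost proxy and keep the top `keep` of them."""
    ranked = sorted(candidates, key=zero_cost_score, reverse=True)
    return ranked[:keep]


def one_shot_train(supernet, preferred: List[Arch], steps: int) -> None:
    """Weight-sharing one-shot training over the preferred subnets only."""
    for _ in range(steps):
        arch = random.choice(preferred)   # sample only from the reduced space
        supernet.activate(arch)           # pick the shared weights this subnet uses
        supernet.train_step()             # update only the activated subnet
```

Because the preferred set is fixed before training, a deployed model can simply be taken from it afterwards, which is what the abstract means by instant specialization with zero search cost.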