Pareto-aware Neural Architecture Generation for Diverse Computational Budgets
Saved in:
Main Authors:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: Designing feasible and effective architectures under diverse computational budgets, incurred by different applications/devices, is essential for deploying deep models in real-world applications. To achieve this goal, existing methods often perform an independent architecture search process for each target budget, which is both inefficient and unnecessary. More critically, these independent search processes cannot share their learned knowledge (i.e., the distribution of good architectures) with each other and thus often yield limited search results. To address these issues, we propose a Pareto-aware Neural Architecture Generator (PNAG) which only needs to be trained once and dynamically produces the Pareto-optimal architecture for any given budget via inference. To train our PNAG, we learn the whole Pareto frontier by jointly finding multiple Pareto-optimal architectures under diverse budgets. Such a joint search algorithm not only greatly reduces the overall search cost but also improves the search results. Extensive experiments on three hardware platforms (i.e., mobile device, CPU, and GPU) show the superiority of our method over existing methods.
DOI: 10.48550/arxiv.2210.07634
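The abstract describes a generator that is trained once and then maps any target budget to an architecture at inference time. As a loose illustration of that interface only, not the authors' implementation, the following PyTorch sketch conditions a small MLP on a scalar budget and decodes per-stage depth/width choices; the search space, class names, and decoding scheme are all hypothetical assumptions.

```python
# Illustrative sketch of a budget-conditioned architecture generator.
# Everything here (ArchGenerator, the 5-stage search space, greedy
# decoding) is a hypothetical stand-in, not the PNAG of the paper.
import torch
import torch.nn as nn

NUM_STAGES = 5                      # assumed search space: 5 stages
DEPTH_CHOICES = [2, 3, 4]           # candidate blocks per stage
WIDTH_CHOICES = [32, 64, 96, 128]   # candidate channels per stage

class ArchGenerator(nn.Module):
    """Maps a scalar budget (e.g., normalized latency) to per-stage
    depth/width logits; a single trained model serves every budget."""
    def __init__(self, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.depth_head = nn.Linear(hidden, NUM_STAGES * len(DEPTH_CHOICES))
        self.width_head = nn.Linear(hidden, NUM_STAGES * len(WIDTH_CHOICES))

    def forward(self, budget):
        h = self.body(budget.view(-1, 1))
        d = self.depth_head(h).view(-1, NUM_STAGES, len(DEPTH_CHOICES))
        w = self.width_head(h).view(-1, NUM_STAGES, len(WIDTH_CHOICES))
        return d, w

def sample_architecture(gen, budget):
    """Greedy decode: pick the most likely depth/width per stage."""
    with torch.no_grad():
        d_logits, w_logits = gen(torch.tensor([budget]))
    depths = [DEPTH_CHOICES[i] for i in d_logits.argmax(-1)[0].tolist()]
    widths = [WIDTH_CHOICES[i] for i in w_logits.argmax(-1)[0].tolist()]
    return list(zip(depths, widths))

gen = ArchGenerator()
# After (hypothetical) joint training across budgets, one forward pass
# yields an architecture for any target budget, no re-search needed:
print(sample_architecture(gen, budget=0.3))  # tighter budget
print(sample_architecture(gen, budget=0.9))  # looser budget
```

The point of the interface is the one highlighted in the summary: the per-budget search loop is replaced by a single conditional model, so serving a new device budget costs one inference pass rather than a new search.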