Contrastive Embeddings for Neural Architectures
Main Authors:
Format: Article
Language: English
Keywords:
Online Access: Order full text
Summary: The performance of algorithms for neural architecture search strongly depends on the parametrization of the search space. We use contrastive learning to identify networks across different initializations based on their data Jacobians, and automatically produce the first architecture embeddings that are independent of the parametrization of the search space. Using our contrastive embeddings, we show that traditional black-box optimization algorithms, without modification, can reach state-of-the-art performance in Neural Architecture Search. As our method provides a unified embedding space, we perform transfer learning between search spaces for the first time. Finally, we show the evolution of embeddings during training, motivating future studies into using embeddings at different training stages to gain a deeper understanding of the networks in a search space.
DOI: 10.48550/arxiv.2102.04208
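
The core idea in the summary, embedding architectures by contrasting the data Jacobians of differently initialized copies of the same network, can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' implementation: the probe batch, the projection head, the temperature, and the toy architectures are all hypothetical, and the NT-Xent loss stands in for whatever contrastive objective the paper actually uses.

```python
# Minimal sketch (assumed details, not the paper's code): treat two random
# initializations of the same architecture as a positive pair, all other
# architectures in the batch as negatives, and contrast their data Jacobians.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd.functional import jacobian

def data_jacobian_features(net: nn.Module, probe: torch.Tensor) -> torch.Tensor:
    """Full Jacobian of the network's outputs w.r.t. a fixed probe batch, flattened."""
    jac = jacobian(net, probe)  # shape: (*output_shape, *probe_shape)
    return jac.flatten()

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """NT-Xent contrastive loss; row i of z1 and row i of z2 form a positive pair."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = (z @ z.t()) / tau
    # Mask out self-similarity so each row is classified against the 2n-1 others.
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy "search space": two architectures, each instantiated twice with fresh weights.
archs = [
    lambda: nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)),
    lambda: nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4)),
]
probe = torch.randn(4, 8)            # fixed probe inputs shared by all networks
proj = nn.Linear(4 * 4 * 4 * 8, 32)  # hypothetical projection head -> embedding
z1 = torch.stack([proj(data_jacobian_features(a(), probe)) for a in archs])
z2 = torch.stack([proj(data_jacobian_features(a(), probe)) for a in archs])
loss = nt_xent(z1, z2)
loss.backward()                      # trains proj to embed architectures
```

Computing the full Jacobian this way is exact but expensive; a practical implementation would likely subsample outputs or use vector-Jacobian products. In the paper's setting, the resulting embeddings are then consumed by standard black-box optimizers for architecture search; the sketch stops at the contrastive loss.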