Neural Functional Transformers
Format: Article
Language: English
Online access: Order full text
Abstract: The recent success of neural networks as implicit representations of data has driven growing interest in neural functionals: models that can process other neural networks as input by operating directly over their weight spaces. Nevertheless, constructing expressive and efficient neural functional architectures that can handle high-dimensional weight-space objects remains challenging. This paper uses the attention mechanism to define a novel set of permutation equivariant weight-space layers and composes them into deep equivariant models called neural functional Transformers (NFTs). NFTs respect weight-space permutation symmetries while incorporating the advantages of attention, which have exhibited remarkable success across multiple domains. In experiments processing the weights of feedforward MLPs and CNNs, we find that NFTs match or exceed the performance of prior weight-space methods. We also leverage NFTs to develop Inr2Array, a novel method for computing permutation invariant latent representations from the weights of implicit neural representations (INRs). Our proposed method improves INR classification accuracy by up to $+17\%$ over existing methods. We provide an implementation of our layers at https://github.com/AllanYangZhou/nfn.
DOI: 10.48550/arxiv.2305.13546
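
The following is a minimal, hypothetical sketch of the core idea described in the abstract: self-attention applied across a network's hidden-neuron axis, with no positional encoding, yields a layer that is equivariant to permutations of those neurons. It is not the authors' NFT implementation (see the linked repository for the actual layers); the class name `NeuronSelfAttention`, the dimensions, and the single-matrix setup are illustrative assumptions.

```python
# Hypothetical sketch, not the authors' code: attention over the rows of a
# single weight matrix, where each hidden neuron's incoming weight vector is
# treated as one token. Because no positional encoding is used, permuting the
# neuron axis permutes the output rows identically (permutation equivariance).
import torch
import torch.nn as nn

class NeuronSelfAttention(nn.Module):
    """Self-attention over the neuron axis of a weight matrix."""
    def __init__(self, in_dim, embed_dim=64, num_heads=4):
        super().__init__()
        self.proj_in = nn.Linear(in_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj_out = nn.Linear(embed_dim, in_dim)

    def forward(self, W):
        # W: (batch, n_neurons, in_dim) -- one token per hidden neuron.
        x = self.proj_in(W)
        x, _ = self.attn(x, x, x)
        return self.proj_out(x)

# Quick equivariance check on random weights.
torch.manual_seed(0)
layer = NeuronSelfAttention(in_dim=16).eval()
W = torch.randn(1, 8, 16)              # 8 hidden neurons, 16 inputs each
perm = torch.randperm(8)
out_then_perm = layer(W)[:, perm]      # permute the layer's output
perm_then_out = layer(W[:, perm])      # permute the input, then apply the layer
print(torch.allclose(out_then_perm, perm_then_out, atol=1e-5))  # True
```

The actual NFT layers must additionally couple the row and column permutation symmetries shared by adjacent weight matrices of an MLP or CNN; this sketch only demonstrates why attention without positional encodings is a natural building block for such permutation-respecting layers.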