Revisiting Implicit Models: Sparsity Trade-offs Capability in Weight-tied Model for Vision Tasks
Saved in:

Main Authors: , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Implicit models such as Deep Equilibrium Models (DEQs) have garnered significant attention in the community for their ability to train infinite-layer models with elegant solution-finding procedures and a constant memory footprint. However, despite several attempts, these methods remain heavily constrained by model inefficiency and optimization instability. Furthermore, fair benchmarking across the relevant methods for vision tasks is missing. In this work, we revisit the line of implicit models and trace them back to the original weight-tied models. Surprisingly, we observe that weight-tied models are more effective, stable, and efficient on vision tasks than the DEQ variants. Through the lens of these simple yet clean weight-tied models, we further study the fundamental limits of their model capacity and propose the use of distinct sparse masks to improve it. Finally, for practitioners, we offer design guidelines on depth, width, and sparsity selection for weight-tied models, and demonstrate the generalizability of our insights to other learning paradigms.
DOI: 10.48550/arxiv.2307.08013
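As a rough illustration of the architecture the abstract describes, here is a minimal PyTorch sketch of a weight-tied block: one set of convolution weights is reused across all unrolled iterations, and a distinct fixed sparse mask is applied at each iteration. This is a hypothetical example of the idea, not the authors' implementation; the class name, masking scheme, and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightTiedBlock(nn.Module):
    """Weight-tied block: one conv layer unrolled `depth` times.

    Hypothetical sketch of the idea in the abstract; the fixed random
    binary mask per iteration illustrates "distinct sparse masks" and
    is not the paper's exact scheme.
    """

    def __init__(self, channels: int, depth: int, sparsity: float = 0.5):
        super().__init__()
        self.depth = depth
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.GroupNorm(8, channels)  # assumes channels % 8 == 0
        # One binary mask per unrolled step, fixed at initialization:
        # the weights are shared, but each step keeps a different
        # sparse subset of them active.
        masks = (torch.rand(depth, *self.conv.weight.shape) > sparsity).float()
        self.register_buffer("masks", masks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for t in range(self.depth):
            w = self.conv.weight * self.masks[t]  # step-specific sparse mask
            x = F.relu(self.norm(F.conv2d(x, w, self.conv.bias, padding=1)))
        return x

# Usage: parameter count stays constant as `depth` grows, since the
# convolution weights are tied across all unrolled steps.
block = WeightTiedBlock(channels=64, depth=6)
out = block(torch.randn(2, 64, 32, 32))  # -> shape (2, 64, 32, 32)
```

Because the weights are tied, increasing the depth adds compute but no parameters; in this sketch it is the per-step sparse masks that differentiate the effective layers, which is one way to read the abstract's claim that distinct sparse masks can improve the capacity of weight-tied models.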