Implicit Bias in Matrix Factorization and its Explicit Realization in a New Architecture
Format: Article
Language: English
Abstract: Gradient descent for matrix factorization is known to exhibit an implicit
bias toward approximately low-rank solutions. While existing theories often
assume the boundedness of iterates, empirically the bias persists even with
unbounded sequences. We thus hypothesize that implicit bias is driven by
divergent dynamics markedly different from the convergent dynamics for data
fitting. Using this perspective, we introduce a new factorization model:
$X\approx UDV^\top$, where $U$ and $V$ are constrained within norm balls, while
$D$ is a diagonal factor allowing the model to span the entire search space.
Our experiments reveal that this model exhibits a strong implicit bias
regardless of initialization and step size, yielding truly (rather than
approximately) low-rank solutions. Furthermore, drawing parallels between
matrix factorization and neural networks, we propose a novel neural network
model featuring constrained layers and diagonal components. This model achieves
strong performance across various regression and classification tasks while
finding low-rank solutions, resulting in efficient and lightweight networks.
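As a concrete illustration of the factorization model described above, here is a minimal sketch in PyTorch. It assumes Frobenius-norm balls for $U$ and $V$, plain projected SGD, and a square loss; the abstract does not specify these details, so the function names, radius, and hyperparameters are illustrative choices rather than the authors' implementation:

```python
# Sketch of the constrained factorization X ≈ U D V^T.
# ASSUMPTIONS (not stated in the abstract): Frobenius-norm balls of
# radius 1, plain projected SGD, square loss, and these hyperparameters.
import torch

def project_norm_ball_(W: torch.Tensor, radius: float = 1.0) -> None:
    # In place: rescale W so its Frobenius norm is at most `radius`.
    n = W.norm()
    if n > radius:
        W.mul_(radius / n)

def factorize(X: torch.Tensor, rank: int, steps: int = 5000, lr: float = 0.1):
    m, n = X.shape
    U = torch.randn(m, rank, requires_grad=True)
    V = torch.randn(n, rank, requires_grad=True)
    d = torch.randn(rank, requires_grad=True)  # diagonal entries of D

    opt = torch.optim.SGD([U, V, d], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # (U * d) multiplies column j of U by d[j], i.e. U @ diag(d).
        loss = ((U * d) @ V.T - X).pow(2).mean()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Projection step: U and V stay inside their norm balls,
            # while d remains free so U D V^T can span the full space.
            project_norm_ball_(U)
            project_norm_ball_(V)
    return U.detach(), d.detach(), V.detach()
```

By analogy, the neural-network variant with constrained layers and diagonal components could be realized as a layer that pairs a norm-constrained weight matrix with a free diagonal scaling. Again, this is an assumed realization for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ConstrainedDiagLinear(nn.Module):
    # Hypothetical layer pairing a norm-constrained weight with a free
    # diagonal scaling, mirroring the roles of U/V and D above.
    def __init__(self, in_features: int, out_features: int, radius: float = 1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.diag = nn.Parameter(torch.ones(out_features))
        self.radius = radius

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x @ self.weight.T) * self.diag

    @torch.no_grad()
    def project_(self) -> None:
        # Call after each optimizer step to enforce the norm-ball constraint.
        n = self.weight.norm()
        if n > self.radius:
            self.weight.mul_(self.radius / n)
```

If the hypothesized bias holds, many entries of the diagonal factor should shrink toward exactly zero during training, yielding a truly (rather than approximately) low-rank model.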
DOI: 10.48550/arxiv.2501.16322