Custom Gradient Estimators are Straight-Through Estimators in Disguise
Saved in:

| Main authors: | , , , |
| --- | --- |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Abstract: | Quantization-aware training comes with a fundamental challenge: the derivatives of quantization functions such as rounding are zero almost everywhere and nonexistent elsewhere. Various differentiable approximations of quantization functions have been proposed to address this issue. In this paper, we prove that when the learning rate is sufficiently small, a large class of weight gradient estimators is equivalent to the straight-through estimator (STE). Specifically, after swapping in the STE and adjusting both the weight initialization and the learning rate in SGD, the model will train in almost exactly the same way as it did with the original gradient estimator. Moreover, we show that for adaptive learning rate algorithms like Adam, the same result can be seen without any modifications to the weight initialization and learning rate. We experimentally show that these results hold both for a small convolutional model trained on the MNIST dataset and for a ResNet50 model trained on ImageNet. |
| --- | --- |
| DOI: | 10.48550/arxiv.2405.05171 |
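
To make the terms in the abstract concrete, here is a minimal JAX sketch, not taken from the paper, contrasting the straight-through estimator with a hypothetical custom gradient estimator for rounding: the STE rounds in the forward pass and treats rounding as the identity in the backward pass, while the custom estimator substitutes an illustrative tanh-based surrogate gradient.

```python
import jax
import jax.numpy as jnp


def ste_round(x):
    # Straight-through estimator: the forward value is round(x), but the
    # backward pass sees the identity, because stop_gradient hides the
    # zero-almost-everywhere derivative of rounding.
    return x + jax.lax.stop_gradient(jnp.round(x) - x)


@jax.custom_vjp
def custom_round(x):
    # Rounding with a user-chosen gradient estimator (a "custom gradient
    # estimator" in the abstract's sense).
    return jnp.round(x)


def _custom_round_fwd(x):
    return jnp.round(x), x


def _custom_round_bwd(x, g):
    # Illustrative surrogate (not from the paper): damp the incoming gradient
    # as if the forward pass had been a smooth, tanh-like step.
    return (g * (1.0 - jnp.tanh(x - jnp.round(x)) ** 2),)


custom_round.defvjp(_custom_round_fwd, _custom_round_bwd)

if __name__ == "__main__":
    w = jnp.array([0.3, 1.7, -0.6])
    print(jax.grad(lambda w: jnp.sum(ste_round(w) ** 2))(w))     # STE gradient: 2 * round(w)
    print(jax.grad(lambda w: jnp.sum(custom_round(w) ** 2))(w))  # surrogate-damped gradient
```

In these terms, the paper's claim is that training with something like `custom_round` under SGD (after suitably adjusting the weight initialization and learning rate), or under Adam with no adjustments, proceeds almost exactly as training with `ste_round` does.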