Accelerated Neural Network Training with Rooted Logistic Objectives
Main Authors: | |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Many neural networks deployed in real-world scenarios are trained using
cross-entropy-based loss functions. From the optimization perspective, it is
known that the behavior of first-order methods such as gradient descent
depends crucially on the separability of the dataset. In fact, even in the
simplest case of binary classification, the rate of convergence depends on two
factors: (1) the condition number of the data matrix, and (2) the separability
of the dataset. Without further pre-processing techniques such as
over-parametrization, data augmentation, etc., separability is an intrinsic
quantity of the data distribution under consideration. We focus on the
landscape design of the logistic function and derive a novel sequence of
*strictly* convex functions that are at least as strict as the logistic loss.
The minimizers of these functions coincide with those of the minimum-norm
solution wherever possible. The strict convexity of the derived functions
extends to fine-tuning state-of-the-art models and applications. In our
empirical analysis, we apply the proposed rooted logistic objective to
multiple deep models, e.g., fully-connected neural networks and transformers,
on various classification benchmarks. Our results illustrate that training
with the rooted loss function converges faster and yields performance
improvements. Furthermore, we illustrate applications of the novel rooted loss
function in generative-modeling-based downstream applications, such as
fine-tuning a StyleGAN model with the rooted loss. The code implementing our
losses and models is available for open-source software development purposes
at: https://anonymous.4open.science/r/rooted_loss. |
DOI: | 10.48550/arxiv.2310.03890 |
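
The abstract refers to a rooted logistic objective used as a drop-in replacement for cross-entropy-style losses but does not state its exact functional form. Below is a minimal, hypothetical PyTorch sketch of one way a k-th-root variant of the binary logistic loss could be wrapped as a loss module; the class name `RootedLogisticLoss`, the parameter `k`, and the specific rooted expression are illustrative assumptions, not the paper's definition (see arXiv:2310.03890 for the actual objective).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RootedLogisticLoss(nn.Module):
    """Hypothetical k-th-root variant of the binary logistic loss.

    NOTE: the functional form below is an illustrative assumption; the exact
    rooted objective is defined in the paper (arXiv:2310.03890).
    """

    def __init__(self, k: float = 2.0):
        super().__init__()
        # k = 1 is assumed to recover the standard logistic loss in this sketch.
        assert k >= 1.0
        self.k = k

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # targets in {0, 1}; map to {-1, +1} and form per-sample margins
        margins = logits * (2.0 * targets - 1.0)
        # standard logistic loss: log(1 + exp(-margin)) = softplus(-margin)
        logistic = F.softplus(-margins)
        # assumed rooted variant: apply a 1/k power to a shifted logistic term
        rooted = (1.0 + logistic) ** (1.0 / self.k) - 1.0  # illustrative only
        return rooted.mean()


# Usage sketch: drop-in replacement for nn.BCEWithLogitsLoss in a training step.
if __name__ == "__main__":
    model = nn.Linear(10, 1)
    criterion = RootedLogisticLoss(k=2.0)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(32, 10)
    y = torch.randint(0, 2, (32, 1)).float()

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```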