Calibration-then-Calculation: A Variance Reduced Metric Framework in Deep Click-Through Rate Prediction Models
Format: Article
Language: English
Abstract: The adoption of deep learning across various fields has been extensive, yet there is little focus on evaluating the performance of deep learning pipelines. Typically, as datasets grow larger and models more complex, the training process is run only once and the result is compared to previous benchmarks. This practice can lead to imprecise comparisons, because neural network evaluation metrics vary from run to run due to the inherent randomness of the training process. Traditional solutions, such as running the training process multiple times, are often infeasible due to computational constraints. In this paper, we introduce a novel metric framework, the Calibrated Loss Metric, designed to address this issue by reducing the variance present in its conventional counterpart. As a result, the new metric is more accurate at detecting effective modeling improvements. Our approach is supported by theoretical justification and extensive experimental validation in the context of Deep Click-Through Rate Prediction Models.
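The abstract does not spell out the calibration step, so the following is only a minimal sketch of the calibration-then-calculation idea under one common assumption: a global intercept (bias) correction applied to the predicted click probabilities before the log loss is computed. All names (`log_loss`, `calibrate_intercept`, `calibrated_log_loss`) are illustrative rather than taken from the paper, and for brevity the sketch calibrates on the same data it scores; the paper may use a separate calibration split.

```python
# Sketch: calibrate predictions with a global logit shift, then compute log loss.
# The shift absorbs run-to-run differences in the model's overall bias, one
# common source of variance in the raw metric. Assumed setup, not the paper's code.
import numpy as np

def log_loss(y, p, eps=1e-12):
    """Standard binary cross-entropy on predicted probabilities p."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def calibrate_intercept(y, p, eps=1e-12, iters=60):
    """Find a scalar logit shift b so that mean(sigmoid(logit(p) + b)) matches mean(y)."""
    p = np.clip(p, eps, 1 - eps)
    logits = np.log(p) - np.log(1 - p)
    lo, hi, target = -20.0, 20.0, y.mean()
    for _ in range(iters):  # bisection; mean prediction is monotone in b
        mid = 0.5 * (lo + hi)
        if (1.0 / (1.0 + np.exp(-(logits + mid)))).mean() < target:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    return 1.0 / (1.0 + np.exp(-(logits + b)))

def calibrated_log_loss(y, p):
    """Calibration-then-calculation: correct the global bias, then score."""
    return log_loss(y, calibrate_intercept(y, p))

# Toy illustration: two runs that differ only in a global bias get different
# raw log losses but nearly identical calibrated log losses.
rng = np.random.default_rng(0)
y = (rng.random(100_000) < 0.05).astype(float)
base = np.clip(0.05 + 0.02 * rng.standard_normal(100_000), 1e-3, 1 - 1e-3)
run_a = np.clip(base * 0.8, 1e-3, 1 - 1e-3)
run_b = np.clip(base * 1.2, 1e-3, 1 - 1e-3)
print("raw:       ", log_loss(y, run_a), log_loss(y, run_b))
print("calibrated:", calibrated_log_loss(y, run_a), calibrated_log_loss(y, run_b))
```

In this toy setup the two runs' raw losses differ, while the calibrated losses agree closely, which is the variance-reduction behavior the abstract describes at a high level.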
DOI: 10.48550/arxiv.2401.16692