Classes Are Not Equal: An Empirical Study on Image Recognition Fairness

Format: Article
Language: English
Online access: Order full text

Abstract: In this paper, we present an empirical study on image recognition
fairness, i.e., extreme class accuracy disparity on balanced data such as
ImageNet. We experimentally demonstrate that classes are not equal and that
the fairness issue is prevalent for image classification models across various
datasets, network architectures, and model capacities. Moreover, we identify
several intriguing properties of fairness. First, the unfairness lies in
problematic representations rather than in classifier bias. Second, with the
proposed concept of Model Prediction Bias, we investigate the origins of
problematic representations during optimization. Our findings reveal that
models tend to exhibit greater prediction bias for classes that are harder to
recognize: samples from many other classes are confused with these harder
classes, so their False Positives (FPs) dominate the learning signal during
optimization, leading to their poor accuracy. Further, we conclude that data
augmentation and representation learning algorithms improve overall
performance by promoting fairness to some degree in image classification. The
code is available at
https://github.com/dvlab-research/Parametric-Contrastive-Learning.

DOI: 10.48550/arxiv.2402.18133
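
To make the abstract's notions of per-class disparity and prediction bias concrete, here is a minimal sketch, not the paper's implementation: the helper name `per_class_stats` and the toy confusion-matrix values are illustrative assumptions. It computes per-class accuracy and false-positive counts from a confusion matrix; under the abstract's claim, the hardest class should show both the lowest accuracy and the largest false-positive count.

```python
import numpy as np

def per_class_stats(conf_mat):
    """Per-class recall and false-positive counts from a confusion
    matrix whose rows are true classes and columns are predictions."""
    tp = np.diag(conf_mat)
    recall = tp / conf_mat.sum(axis=1)   # per-class accuracy
    fp = conf_mat.sum(axis=0) - tp       # other-class samples predicted as this class
    return recall, fp

# Toy 3-class confusion matrix: class 2 is the "hard" class.
cm = np.array([[90,  2,  8],
               [ 3, 79, 18],
               [12, 18, 70]])
recall, fp = per_class_stats(cm)
print("per-class accuracy:", recall)  # [0.90 0.79 0.70]: class 2 is lowest
print("false positives:   ", fp)      # [15 20 26]: class 2 attracts the most confusions
```

In the abstract's terms, a hard class that attracts many confusions accumulates many FPs, and those FPs can then dominate the learning signal that class receives during optimization, which is the mechanism the paper links to its poor accuracy.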