Deep Blind Image Quality Assessment Powered by Online Hard Example Mining
Published in: IEEE Transactions on Multimedia, 2023-01, Vol. 25, pp. 1-11
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Recently, blind image quality assessment (BIQA) models based on deep neural networks (DNNs) have achieved impressive performance on existing datasets. However, due to the intrinsic imbalance of the training set, not all distortions or images are handled equally well. Online hard example mining (OHEM) is a promising way to alleviate this issue. Inspired by the recent finding that network pruning disproportionately hampers the model's memorization of a tractable subset of atypical, low-quality, long-tailed samples, which are hard to memorize during training and easily "forgotten" during pruning, we propose an effective "plug-and-play" OHEM pipeline, especially for generalizable deep BIQA. Specifically, we train two parallel weight-sharing branches simultaneously, where one is the full model and the other is a "self-competitor" generated from the full model online by network pruning. We then leverage the prediction disagreement between the full model and its pruned variant (i.e., the self-competitor) to expose easily "forgettable" samples, which are therefore regarded as the hard ones. We then enforce prediction consistency between the full model and its pruned variant to implicitly put more focus on these hard samples, which helps the full model recover the forgettable information lost through pruning. Extensive experiments across multiple datasets and BIQA models demonstrate that the proposed OHEM can improve model performance and generalizability as measured by correlation numbers and group maximum differentiation (gMAD) competition. Our code is available at: https://github.com/wangzhihua520/IQA_with_OHEM
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2023.3257564
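The two-branch pipeline the abstract describes can be sketched roughly as follows. This is a minimal NumPy illustration under assumed details (magnitude pruning, a toy linear quality predictor, squared-error consistency); the function names and pruning choice are assumptions for illustration, not the authors' actual implementation, which uses deep BIQA networks.

```python
# Hypothetical sketch of OHEM via a pruned "self-competitor":
# expose hard samples through prediction disagreement between a full
# model and its pruned variant, then enforce prediction consistency.
import numpy as np

rng = np.random.default_rng(0)

def prune_by_magnitude(w, sparsity=0.5):
    """Return a pruned copy of w with the smallest-magnitude fraction
    `sparsity` of entries zeroed (the online 'self-competitor')."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

def predict(w, x):
    """Toy quality predictor: one linear score per feature vector."""
    return x @ w

# Toy batch: 8 "images" with 16-dim features; both branches share weights.
x = rng.normal(size=(8, 16))
w_full = rng.normal(size=16)
w_pruned = prune_by_magnitude(w_full, sparsity=0.5)

q_full = predict(w_full, x)      # full-model predictions
q_pruned = predict(w_pruned, x)  # self-competitor predictions

# Per-sample disagreement exposes easily "forgettable" (hard) samples.
disagreement = np.abs(q_full - q_pruned)
hard_idx = np.argsort(disagreement)[::-1][:2]   # e.g. the 2 hardest

# A consistency loss dominated by high-disagreement samples implicitly
# focuses training on the hard ones (squared error is an assumption).
consistency_loss = np.mean(disagreement ** 2)
```

In a real training loop this consistency term would be added to the quality-regression loss and backpropagated through the shared weights, so that reducing the full/pruned disagreement forces the full model to re-learn what pruning "forgot".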