Image Dehazing Assessment: A Real-World Dataset and a Haze Density-Aware Criteria
Published in: IEEE Transactions on Multimedia, 2024, Vol. 26, pp. 6037-6049
Format: Article
Language: English
Abstract: Full-reference image dehazing quality assessment (FR-IDQA) evaluates the visual quality of a dehazed image by measuring its differences from a clear reference. Existing FR-IDQA methods are not convincing due to the lack of well-aligned datasets of hazy and clear image pairs, and their limited hand-crafted features make it difficult to simulate the complicated perception of the human visual system (HVS). In this work, we build a real-world image dataset, namely RW-Haze, which comprises natural hazy images and their well-aligned clear references. Each clear image is paired with several hazy images whose haze levels range from slight to heavy. Meanwhile, existing FR-IDQA works evaluate dehazed image quality in a global manner, without considering local haze distributions in the original hazy image. In fact, the perceived haze in a natural hazy image is not uniformly distributed, and the haze density varies with scene depth. Based on this prior observation, we design a haze density-aware convolutional neural network (CNN), namely DehIQA, for FR-IDQA. It adopts transfer learning to alleviate the issue of insufficient labeled data. Specifically, we divide image dehazing assessment into two tasks. The source task is to classify unpaired clear and hazy images, which enforces the deep network to learn haze-related features. The target task is image quality assessment, which is achieved by transferring the model trained on the source task to the target task. Considering that the perceived distortion in a dehazed image is also not uniform, we introduce a haze density-aware mechanism into DehIQA, which assigns different weights to different local regions of a dehazed image according to the dark channel of the original hazy image. Extensive experimental results show that DehIQA outperforms state-of-the-art (SOTA) works on the benchmark dataset and achieves better consistency with human perception.
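The haze density-aware weighting described in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the authors' released code: it computes the dark channel of the original hazy image (the classical dark channel prior) and converts it into per-region weights that could modulate a region-wise quality score, so that regions with denser haze contribute more. The function names, patch size, grid size, and sum-to-one normalization are assumptions made for this sketch.

```python
# Minimal sketch (assumptions, not the authors' implementation): derive
# region weights for a dehazed-image quality score from the dark channel
# of the original hazy image, as motivated in the abstract.
import numpy as np


def dark_channel(hazy_rgb: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel prior: per-pixel minimum over RGB, then a local minimum filter."""
    min_rgb = hazy_rgb.min(axis=2)                      # H x W
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark


def haze_density_weights(hazy_rgb: np.ndarray, grid: int = 8) -> np.ndarray:
    """Average the dark channel over a grid x grid layout of regions.

    Denser haze -> larger dark-channel values -> larger weight, so heavily
    hazed regions would contribute more to an aggregated quality score
    (normalizing the weights to sum to 1 is an assumption of this sketch).
    """
    dark = dark_channel(hazy_rgb)
    h, w = dark.shape
    weights = np.zeros((grid, grid), dtype=np.float64)
    for gi in range(grid):
        for gj in range(grid):
            block = dark[gi * h // grid:(gi + 1) * h // grid,
                         gj * w // grid:(gj + 1) * w // grid]
            weights[gi, gj] = block.mean()
    return weights / weights.sum()


if __name__ == "__main__":
    hazy = np.random.rand(128, 128, 3).astype(np.float32)  # stand-in hazy image
    w = haze_density_weights(hazy)
    print(w.shape, w.sum())  # (8, 8), ~1.0
```

In the paper's setting, such weights would be combined with region-wise features or scores produced by the CNN; the grid-based pooling above is only one plausible way to realize "different weights for different local regions".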
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2023.3345141