Distill-DBDGAN: Knowledge Distillation and Adversarial Learning Framework for Defocus Blur Detection

Bibliographic Details
Published in: ACM Transactions on Multimedia Computing, Communications, and Applications, 2023-06, Vol. 19 (2s), pp. 1-26, Article 87
Authors: Jonna, Sankaraganesh; Medhi, Moushumi; Sahay, Rajiv Ranjan
Format: Article
Language: English
Online Access: Full text
Abstract: Defocus blur detection (DBD) aims to segment the blurred regions of an image affected by defocus blur. It is a crucial pre-processing step for various computer vision tasks. With the increasing popularity of small mobile devices, there is a need for a computationally efficient method to detect defocus blur accurately. We propose an efficient defocus blur detection method that estimates the probability of each pixel being focused or blurred on resource-constrained devices. Despite remarkable advances made by recent deep learning-based methods, they still suffer from several challenges: background clutter, scale sensitivity, low-contrast focused regions that are indistinguishable from out-of-focus blur, and especially high computational cost and memory requirements. To address the first three challenges, we develop a novel deep network that efficiently detects the blur map from the input blurred image. Specifically, we integrate multi-scale features in the deep network to resolve scale ambiguities and simultaneously model the non-local structural correlations in the high-level blur features. To handle the last two issues, we frame our DBD algorithm to perform knowledge distillation, transferring information from the larger teacher network to a compact student network. All the networks are adversarially trained in an end-to-end manner to enforce higher-order consistencies between the output and target distributions. Experimental results demonstrate the state-of-the-art performance of the larger teacher network, while our proposed lightweight DBD model imitates the output of the teacher network without significant loss in accuracy. The code, pre-trained model weights, and results will be made publicly available.
ISSN: 1551-6857
eISSN: 1551-6865
DOI: 10.1145/3557897
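
To make the training signal described in the abstract concrete, below is a minimal PyTorch sketch of pixel-wise knowledge distillation with an adversarial term for a binary blur map. This is an illustration under stated assumptions, not the paper's actual implementation: the record gives no architectures or loss formulations, so PatchDiscriminator, student_loss, and the hyper-parameters alpha, beta, and T are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Hypothetical patch-level discriminator over (image, blur map) pairs."""
    def __init__(self, in_ch=4):  # 3 RGB channels + 1 blur-map channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, image, blur_map):
        return self.net(torch.cat([image, blur_map], dim=1))

def student_loss(student_logits, teacher_logits, target, disc, image,
                 alpha=0.5, beta=0.1, T=2.0):
    """Supervised + distillation + adversarial terms; weights are illustrative.

    student_logits, teacher_logits: (B, 1, H, W) raw network outputs.
    target: (B, 1, H, W) ground-truth blur mask in {0, 1}.
    """
    # Supervised term: student output vs. ground-truth blur mask.
    sup = F.binary_cross_entropy_with_logits(student_logits, target)
    # Distillation term: student mimics the teacher's softened blur map;
    # the temperature T smooths both per-pixel probabilities.
    t_soft = torch.sigmoid(teacher_logits.detach() / T)
    s_soft = torch.sigmoid(student_logits / T)
    kd = F.mse_loss(s_soft, t_soft)
    # Adversarial term: reward blur maps the discriminator rates as real,
    # pushing the student's output distribution toward the target's.
    fake_score = disc(image, torch.sigmoid(student_logits))
    adv = F.binary_cross_entropy_with_logits(
        fake_score, torch.ones_like(fake_score))
    return sup + alpha * kd + beta * adv

The MSE on temperature-softened sigmoid outputs stands in for the distillation objective, and the adversarial term rewards blur maps the discriminator cannot tell from ground truth, which is one common way to encourage the higher-order consistency between output and target distributions that the abstract mentions.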