Facial image deblurring network for robust illuminance adaptation and key structure restoration
Published in: Engineering Applications of Artificial Intelligence, 2024-07, Vol. 133, p. 107959, Article 107959
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: Facial image deblurring is an active research area that aims to restore blurry face images to clear ones. However, this task requires special consideration to restore the detailed elements of facial structures, such as the eyes, nose, and mouth. Additionally, facial occlusions and varying illuminance conditions in common environments can degrade deblurring performance. Previous studies have not accounted for these conditions, necessitating the development of a deblurring method that considers these factors. In this paper, we propose a novel approach, called Illuminance-robust Multi-stage DeblurNet with Channel Attention (IMDeCA), which leverages semantic mask and landmark information of the face to restore detailed facial structures. Our approach is robust to various illuminance conditions and facial occlusions. The proposed network comprises a multi-stage structure that extracts facial semantic feature maps, reconstructs clear images, and improves illuminance. We also incorporate facial landmark information into the loss function to ensure well-restored facial structures even in the presence of facial occlusions. Furthermore, we construct a new facial image dataset, named BIO, which includes Blurred images with various types of Illuminance conditions and facial Occlusions. Through extensive experiments on this dataset, we demonstrate the superior performance of the proposed network, which outperforms the latest existing methods.
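The abstract's exact loss formulation is not reproduced in this record. The sketch below shows, in a minimal form, one way facial landmark information could be folded into a reconstruction loss, assuming PyTorch; the function names, the five-point landmark layout, and the Gaussian weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: weight a reconstruction loss more heavily near
# facial landmarks, in the spirit of the landmark-aware loss the abstract
# describes. The Gaussian weighting and all names here are assumptions.
import torch

def landmark_weight_map(landmarks, height, width, sigma=8.0, boost=4.0):
    """Per-pixel weight map: 1 everywhere, plus a Gaussian bump of
    amplitude `boost` centered on each (x, y) landmark coordinate."""
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    weights = torch.ones(height, width)
    for x, y in landmarks:
        dist2 = (xs - x) ** 2 + (ys - y) ** 2
        weights = weights + boost * torch.exp(-dist2 / (2 * sigma ** 2))
    return weights.view(1, 1, height, width)

def landmark_weighted_l1(restored, sharp, landmarks):
    """L1 loss with extra weight on pixels near facial landmarks."""
    _, _, h, w = restored.shape
    w_map = landmark_weight_map(landmarks, h, w).to(restored.device)
    return (w_map * (restored - sharp).abs()).mean()

# Usage: penalize errors around the eyes, nose, and mouth more heavily.
restored = torch.rand(1, 3, 128, 128)  # stand-in for the network output
sharp = torch.rand(1, 3, 128, 128)     # ground-truth sharp image
five_points = torch.tensor([[42., 50.], [86., 50.], [64., 74.],
                            [48., 96.], [80., 96.]])  # hypothetical landmarks
loss = landmark_weighted_l1(restored, sharp, five_points)
```

Such a term would typically be combined with a plain image-wide reconstruction loss, so that the landmark regions are emphasized without ignoring the rest of the face.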
ISSN: 0952-1976, 1873-6769
DOI: 10.1016/j.engappai.2024.107959