Contrastive learning with feature fusion for unpaired thermal infrared image colorization

Bibliographic Details
Published in: Optics and Lasers in Engineering, 2023-11, Vol. 170, p. 107745, Article 107745
Main Authors: Chen, Yu; Zhan, Weida; Jiang, Yichun; Zhu, Depeng; Xu, Xiaoyu; Guo, Jinxin
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: Colorizing unpaired thermal infrared images is a challenging task that existing methods struggle to perform effectively, often producing blurry details and unclear textures. To address these issues, we propose a novel unpaired infrared image colorization method based on contrastive learning with feature fusion. The proposed method introduces a multi-scale feature extraction module, an expansion enhancement module, and a region-aware attention module in the generator. The multi-scale feature extraction module enlarges the model's receptive field, enabling more comprehensive feature extraction and fusing feature information across scales. The expansion enhancement module strengthens the network's recognition ability, capturing more semantic context and image detail. The region-aware attention module lets the network capture both global and local information, adaptively focusing on the features of semantic regions in the image. Furthermore, we introduce a composite loss function that combines adversarial, contrastive, and perceptual losses and exploits texture information extracted from the color image to further enhance detail in the generated colorized image and improve perceptual quality. Extensive experiments on the KAIST and FLIR datasets show that the proposed method retains fine optical details in infrared images, such as surface textures, edge contours, and small targets, producing color images that are highly consistent with the ground truth and rich in realistic detail.
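
For illustration, the sketch below shows one way the composite objective named in the abstract (adversarial, contrastive, and perceptual terms) could be assembled in PyTorch. The PatchNCE-style contrastive formulation, the LSGAN-style adversarial term, the feature-space perceptual term, the function names, and all loss weights are assumptions made for this sketch, not details taken from the paper.

# Illustrative sketch only: assumed PatchNCE-style contrastive term, LSGAN-style
# adversarial term, and feature-space perceptual term with placeholder weights.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q, feat_k, temperature=0.07):
    # InfoNCE over sampled patch features of shape (N, C): matching patches
    # (the diagonal) are positives, all other patches in the batch are negatives.
    feat_q = F.normalize(feat_q, dim=1)
    feat_k = F.normalize(feat_k, dim=1)
    logits = feat_q @ feat_k.t() / temperature
    targets = torch.arange(feat_q.size(0), device=feat_q.device)
    return F.cross_entropy(logits, targets)

def composite_loss(d_fake, feat_ir, feat_gen, perc_real, perc_fake,
                   w_adv=1.0, w_nce=1.0, w_perc=1.0):
    # Weighted sum of the three terms named in the abstract; the weights here
    # are placeholders, not values from the paper.
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))   # generator adversarial term
    nce = patch_nce_loss(feat_gen, feat_ir)              # contrastive content preservation
    perc = F.l1_loss(perc_fake, perc_real)               # perceptual/texture term
    return w_adv * adv + w_nce * nce + w_perc * perc

if __name__ == "__main__":
    torch.manual_seed(0)
    d_fake = torch.rand(4, 1)                # discriminator scores for generated images
    feat_ir = torch.randn(256, 128)          # sampled patch features from the infrared input
    feat_gen = torch.randn(256, 128)         # corresponding patches from the colorized output
    perc_real = torch.randn(4, 512, 16, 16)  # stand-in for deep features of a real color image
    perc_fake = torch.randn(4, 512, 16, 16)  # stand-in for deep features of the generated image
    print(composite_loss(d_fake, feat_ir, feat_gen, perc_real, perc_fake))

In a full training loop this generator objective would be paired with a separate discriminator loss; that part is omitted here since the abstract does not describe it.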
ISSN: 0143-8166, 1873-0302
DOI: 10.1016/j.optlaseng.2023.107745