An unsupervised approach for thermal to visible image translation using autoencoder and generative adversarial network


Bibliographic Details
Published in: Machine Vision and Applications, 2021-07, Vol. 32 (4), Article 99
Authors: Patel, Heena; Upla, Kishor P.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Thermal-to-visible image translation is essential for night-vision applications, since images acquired at night with a visible-spectrum camera depend on the amount of illumination around the observed objects. Poor lighting at night yields inadequate detail in scenes captured by the visible camera, rendering them unusable for high-end applications. Current research on image-to-image translation for daytime imagery has achieved remarkable performance with deep learning methods. However, matching that performance on night-time images is very challenging, especially when little or no light is available. Existing state-of-the-art image-to-image methods fail to preserve fine details and produce incorrect mappings for night-time images because well-matched corresponding visible images are unavailable. Therefore, a novel architecture is proposed here to provide better visual information in night-time scenarios using unsupervised training. It combines generative adversarial networks (GANs) and autoencoders with a newly proposed residual block to extract versatile features from thermal and visible images. To learn a better visualization of night-time images, a gradient-based loss function is introduced alongside the standard GAN and cycle-consistency losses. A weight-sharing scheme is further applied to relate features of the thermal and visible domains. Experimental validation shows consistent qualitative improvement and quantitative gains over existing methods in terms of no-reference quality metrics such as NIQE, BRISQUE, BIQAA and BLIINDS. Such work could benefit many vision-based applications in night-time situations, including border-surveillance systems.
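
The abstract outlines a composite training objective: adversarial and cycle-consistency losses, as in unpaired translation frameworks such as CycleGAN, augmented with a gradient-based loss that encourages preservation of fine detail. The PyTorch sketch below illustrates one plausible form of that combination; the finite-difference gradient operator, the channel averaging, the function names and the lambda weights are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def image_gradients(x):
        # Finite-difference gradients of a (B, C, H, W) batch:
        # dx has width W-1, dy has height H-1.
        dx = x[:, :, :, 1:] - x[:, :, :, :-1]
        dy = x[:, :, 1:, :] - x[:, :, :-1, :]
        return dx, dy

    def gradient_loss(pred, target):
        # Compare edge maps of the generated visible image with those of the
        # thermal input. Averaging over channels lets a 3-channel output be
        # matched against a 1-channel thermal image (an assumption here).
        pdx, pdy = image_gradients(pred)
        tdx, tdy = image_gradients(target)
        return (F.l1_loss(pdx.mean(1, keepdim=True), tdx.mean(1, keepdim=True))
                + F.l1_loss(pdy.mean(1, keepdim=True), tdy.mean(1, keepdim=True)))

    def generator_objective(adv_term, thermal, fake_visible, cycled_thermal,
                            lam_cyc=10.0, lam_grad=1.0):
        # Hypothetical weighting of the three terms named in the abstract:
        # adversarial loss + cycle-consistency loss + gradient-based loss.
        cycle_term = F.l1_loss(cycled_thermal, thermal)  # T -> V -> T reconstruction
        grad_term = gradient_loss(fake_visible, thermal)
        return adv_term + lam_cyc * cycle_term + lam_grad * grad_term

The weight sharing mentioned in the abstract would, in a sketch like this, amount to the thermal and visible autoencoders reusing the same latent-space layers so that features of the two domains are mapped into a common representation.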
ISSN: 0932-8092 (print); 1432-1769 (electronic)
DOI: 10.1007/s00138-021-01223-4