AFCANet: An adaptive feature concatenate attention network for multi-focus image fusion

Bibliographic details
Published in: Journal of King Saud University - Computer and Information Sciences, 2023-10, Vol. 35 (9), p. 101751, Article 101751
Authors: Liu, Shuaiqi; Peng, Weijian; Liu, Yali; Zhao, Jie; Su, Yonggang; Zhang, Yudong
Format: Article
Language: English
Abstract: For multi-focus image fusion, existing deep-learning-based methods cannot effectively learn the texture features and semantic information of the source images to generate high-quality fused images. We therefore develop a new adaptive feature concatenate attention network, named AFCANet, which adaptively learns cross-layer features and retains the texture features and semantic information of the images to generate visually appealing, fully focused images. AFCANet uses an encoder-decoder network as its backbone. In the unsupervised training stage, an adaptive cross-layer skip-connection mode is designed, and a cross-layer adaptive coordinate attention module is built to acquire meaningful information from the image while suppressing unimportant information, yielding a better fusion result. In addition, in the middle of the encoder-decoder network, we introduce an effective channel attention module to fully learn the output of the encoder and accelerate network convergence. In the inference stage, we apply a pixel-based spatial frequency fusion rule to fuse the adaptive features learned by the encoder, which combines the texture and semantic information of the image and produces a more precise decision map. Extensive experiments on public datasets and the HBU-CVMDSP dataset show that AFCANet improves the accuracy of the decision map in focused and defocused regions and better retains the abundant details and edge features of the source images.
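
The pixel-based spatial frequency fusion rule mentioned for the inference stage can be illustrated with a short sketch: compute the local spatial frequency of each source at every pixel, take the sharper source per pixel to form a binary decision map, and blend accordingly. This is a generic NumPy/SciPy illustration applied directly to two registered grayscale source images, not the authors' implementation, which applies the rule to the adaptive features learned by the encoder; the 7x7 window and the hard binary decision map are assumptions made for the example.

import numpy as np
from scipy.ndimage import uniform_filter

def spatial_frequency(img, win=7):
    # Pixel-wise spatial frequency over a local win x win window:
    # SF = sqrt(RF^2 + CF^2), where RF/CF are the local mean squared
    # horizontal/vertical first differences.
    img = img.astype(np.float64)
    dh = np.zeros_like(img)
    dv = np.zeros_like(img)
    dh[:, 1:] = img[:, 1:] - img[:, :-1]   # row-direction differences
    dv[1:, :] = img[1:, :] - img[:-1, :]   # column-direction differences
    rf = uniform_filter(dh ** 2, size=win)
    cf = uniform_filter(dv ** 2, size=win)
    return np.sqrt(rf + cf)

def fuse_by_spatial_frequency(img_a, img_b, win=7):
    # Decision map: 1 where source A is locally sharper, 0 where B is.
    sf_a = spatial_frequency(img_a, win)
    sf_b = spatial_frequency(img_b, win)
    decision = (sf_a >= sf_b).astype(np.float64)
    fused = decision * img_a + (1.0 - decision) * img_b
    return fused, decision

In practice the raw decision map is usually refined (e.g., small-region removal or guided filtering) before the final blend; the paper's network-feature-based variant is what yields the more precise decision maps reported above.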
ISSN: 1319-1578, 2213-1248
DOI: 10.1016/j.jksuci.2023.101751