CNCAN: Contrast and normal channel attention network for super-resolution image reconstruction of crops and weeds

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, 2024-12, Vol. 138, p. 109487, Article 109487
Main authors: Lee, Sung Jae; Yun, Chaeyeong; Im, Su Jin; Park, Kang Ryoung
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Numerous studies have applied camera vision technologies to robot-based agriculture and smart farms. In particular, high accuracy requires high-resolution (HR) images, which in turn require a high-performance camera. However, due to high costs, it is difficult to deploy such cameras widely in agricultural robots. To overcome this limitation, we propose the contrast and normal channel attention network (CNCAN) for super-resolution reconstruction (SR), the first study to achieve accurate semantic segmentation of crops and weeds even with low-resolution (LR) images captured by a low-cost, LR camera. CNCAN uses an attention block and an activation function that consider the high-frequency and contrast information of images, and applies residual connections to improve training stability. In experiments on three open datasets, namely the Bonirob, rice seedling and weed, and crop/weed field image (CWFID) datasets, the mean intersection over union (MIoU) of semantic segmentation for crops and weeds using SR images produced by CNCAN was 0.7685, 0.6346, and 0.6931, respectively, confirming higher accuracy than other state-of-the-art SR methods.
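The abstract describes channel attention driven by contrast and high-frequency information together with residual connections, but does not reproduce the exact architecture. The following is a minimal PyTorch sketch of a contrast-aware channel attention module inside a residual block, in the spirit of the description; the contrast descriptor (per-channel spatial standard deviation plus mean), the reduction ratio, and all layer sizes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a contrast-aware channel attention block with a
# residual connection. Assumptions: statistics, layer sizes, and the
# reduction ratio are illustrative; this is not the CNCAN source code.
import torch
import torch.nn as nn


def channel_contrast(x: torch.Tensor) -> torch.Tensor:
    """Per-channel contrast descriptor (spatial std + mean), shape (B, C, 1, 1)."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True, unbiased=False)
    return std + mean


class ContrastChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Bottleneck implemented with 1x1 convolutions, as is typical for
        # channel attention modules.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.body(channel_contrast(x))  # per-channel attention weights
        return x * weights                        # rescale feature maps


class ResidualAttentionBlock(nn.Module):
    """Convolution + attention with a skip connection, mirroring the
    abstract's point that residual connections improve training stability."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.attn = ContrastChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv2(self.act(self.conv1(x)))
        return x + self.attn(out)  # residual connection


if __name__ == "__main__":
    block = ResidualAttentionBlock(channels=64)
    lr_features = torch.randn(1, 64, 48, 48)  # e.g. features from an LR crop/weed image
    print(block(lr_features).shape)           # torch.Size([1, 64, 48, 48])
```

In a full SR network such blocks would be stacked before an upsampling stage, and the reconstructed HR image would then be fed to the segmentation model evaluated with MIoU.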
ISSN: 0952-1976
DOI: 10.1016/j.engappai.2024.109487