Late fusion of multimodal deep neural networks for weeds classification

Bibliographic Details
Published in: Computers and Electronics in Agriculture, 2020-08, Vol. 175, p. 105506, Article 105506
Main authors: Vo Hoang Trong, Yu Gwang-hyun, Dang Thanh Vu, Kim Jin-young
Format: Article
Language: English
Online access: Full text
Abstract:

Highlights:
•Developing methods for the late fusion of multiple Deep Neural Network models for better performance.
•Proposing methods to determine priority weights for the models.
•Comparing the methods to determine the optimal solution.
•Allowing classification in near real-time.

In agriculture, many types of weeds have a harmful impact on agricultural productivity. Recognizing weeds and understanding the threat they pose to farmlands is a significant challenge because many weeds are quite similar in their external structure, making them difficult to classify. A weed classification approach with high accuracy and fast processing should be incorporated into automatic devices in smart agricultural systems to solve this problem. In this study, we develop a novel classification approach via a voting method based on the late fusion of multimodal Deep Neural Networks (DNNs). The score vector used for voting is calculated either with a Bayesian conditional probability-based method or by determining priority weights so that better DNN models contribute more to the score. We experimentally evaluated the Plant Seedlings and Chonnam National University (CNU) Weeds datasets with 5 DNN models: NASNet, ResNet, Inception-ResNet, MobileNet, and VGG. The results show that our methods achieved an accuracy of 97.31% on the Plant Seedlings dataset and 98.77% on the CNU Weeds dataset. Furthermore, our framework can classify an image in near real-time.
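To illustrate the general idea of priority-weighted late fusion described in the abstract, the following is a minimal, hypothetical Python/NumPy sketch. The function name late_fusion, the per-model weights, and the score values are illustrative assumptions, not the paper's actual implementation or parameters.

    import numpy as np

    def late_fusion(scores, weights):
        """Combine per-model class-score vectors into one fused score vector.

        scores  : array of shape (n_models, n_classes), each row a softmax output
        weights : array of shape (n_models,), priority weight per model
        """
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()      # normalize the priority weights
        return weights @ np.asarray(scores)   # weighted sum of score vectors

    # Hypothetical example: three models scoring one image over four weed classes.
    scores = np.array([
        [0.70, 0.10, 0.10, 0.10],   # e.g. softmax output of model A
        [0.60, 0.20, 0.10, 0.10],   # e.g. softmax output of model B
        [0.25, 0.50, 0.15, 0.10],   # e.g. softmax output of model C
    ])
    weights = [0.5, 0.3, 0.2]       # better models get higher priority weights
    fused = late_fusion(scores, weights)
    print("fused scores:", fused)
    print("predicted class:", int(np.argmax(fused)))

Here the fused prediction follows the two stronger models even though the weakest model votes for a different class, which is the intended effect of letting better DNNs contribute more to the score.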
ISSN: 0168-1699, 1872-7107
DOI: 10.1016/j.compag.2020.105506