Deep Learning Embedded into Smart Traps for Fruit Insect Pests Detection
Saved in:
Published in: | ACM transactions on intelligent systems and technology 2022-11, Vol.14 (1), p.1-24, Article 10 |
---|---|
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
Abstract: | This article presents a novel approach for identifying two species of fruit insect pests as part of a network of intelligent traps designed to monitor the population of these insects in a plantation. The proposed approach uses a simple Digital Image Processing technique to detect regions of the image that likely contain the monitored pests, and an Artificial Neural Network to classify those regions into the correct class given their characteristics. The identification is performed mainly by a Convolutional Neural Network (CNN), which learns the characteristics of the insects from images of the adhesive floor inside a trap. We trained several CNN architectures, with different configurations, on a data set of images collected in the field, aiming for the model with the highest precision and the lowest classification time. The best classification performance was achieved by ResNet18, with precisions of 93.55% and 91.28% for the two pests studied, Ceratitis capitata and Grapholita molesta, respectively, and an overall accuracy of 90.72%. Since the classifier must run on a resource-constrained system inside the trap, we also evaluated the SqueezeNet, MobileNet, and MNASNet architectures to obtain models with shorter inference times and only small losses in accuracy compared to the models we assessed. We also quantized our highest-precision model to further reduce inference time on embedded systems; this achieved precisions of 88.76% and 89.73% for C. capitata and G. molesta, respectively, at the cost of a roughly 2% decrease in overall accuracy. According to the expertise of our partner company, these results are worthwhile for a real-world application, since human laborers generally achieve a precision of about 85%. |
---|---|
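The abstract describes a two-stage pipeline: a simple Digital Image Processing step proposes candidate regions on the trap's adhesive floor, and a CNN then classifies each region. The record does not specify the DIP technique, so the following is only a minimal sketch of one plausible variant, assuming insects appear as dark blobs on a bright board; the threshold value, minimum blob area, and 4-connectivity are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of a region-proposal step (threshold + connected components).
# Assumptions (not from the paper): insects are darker than the adhesive
# floor, a fixed grayscale threshold, 4-connected blobs, and a minimum area
# filter to discard specks.

def candidate_regions(image, threshold=128, min_area=3):
    """Return bounding boxes (top, left, bottom, right) of dark blobs.

    image: list of lists of ints in [0, 255] (grayscale trap-board image).
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or image[y][x] >= threshold:
                continue
            # Flood-fill one connected dark blob.
            stack, pixels = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and image[ny][nx] < threshold):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if len(pixels) >= min_area:
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

# Tiny synthetic "board": 0 = dark pixel (insect), 255 = bright adhesive floor.
board = [
    [255, 255, 255, 255, 255, 255],
    [255,   0,   0, 255, 255, 255],
    [255,   0,   0, 255,   0, 255],  # lone pixel at (2, 4) is below min_area
    [255, 255, 255, 255, 255, 255],
]
print(candidate_regions(board))  # → [(1, 1, 2, 2)]
```

Each returned box would then be cropped and passed to the CNN classifier; in the paper's setting the classifier is one of the trained architectures (e.g. ResNet18 or a quantized variant for the embedded trap hardware).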
ISSN: | 2157-6904 2157-6912 |
DOI: | 10.1145/3552435 |