ZS-DML: Zero-Shot Deep Metric Learning approach for plant leaf disease classification

Bibliographic Details
Published in: Multimedia Tools and Applications, 2024-05, Vol. 83 (18), p. 54147-54164
Authors: Zabihzadeh, Davood; Masoudifar, Mina
Format: Article
Language: English
Online access: Full text
Description
Abstract: Automatic plant disease detection plays an important role in food security. Deep learning methods can precisely detect various types of plant diseases, but at the expense of huge amounts of resources (compute and data). Therefore, employing few-shot or zero-shot learning methods is unavoidable. Deep Metric Learning (DML) is a widely used technique for few-/zero-shot learning. Existing DML methods extract features from the last hidden layer of a pre-trained deep network, which increases the dependence of the extracted features on the observed classes. In this paper, a general discriminative feature learning method is used to learn general features of plant leaves. Moreover, a proxy-based loss is utilized that learns the embedding without a sampling phase while achieving a higher convergence rate. The network is trained on the Plant Village dataset, whose images are split into 32 source classes and 6 target classes. The knowledge learned from the source domain is transferred to the target domain in a zero-shot setting: a few samples of the target domain are presented to the network as a gallery, and the network is then evaluated on that domain. The experimental results show that by presenting a few, or even only one, sample(s) of new classes to the network without a fine-tuning step, the method achieves a classification accuracy of 99%/80.64% for few/one image(s) per class.
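
The abstract describes two components that can be sketched concretely: a proxy-based loss that learns the embedding without a pair/triplet sampling phase, and gallery-based nearest-neighbor classification of unseen classes. The following minimal PyTorch sketch is not the authors' implementation; the normalized-softmax form of the proxy loss, the scale value, and all names (ProxyNCALoss, classify_zero_shot) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyNCALoss(nn.Module):
    # Proxy-based loss: one learnable proxy per source class, so training
    # needs no pair/triplet sampling phase (assumed normalized-softmax form).
    def __init__(self, num_classes, embed_dim, scale=32.0):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale = scale

    def forward(self, embeddings, labels):
        e = F.normalize(embeddings, dim=1)    # L2-normalize embeddings
        p = F.normalize(self.proxies, dim=1)  # L2-normalize proxies
        logits = self.scale * e @ p.t()       # cosine similarity to every proxy
        # Pull each embedding toward its class proxy, push it from the rest.
        return F.cross_entropy(logits, labels)

@torch.no_grad()
def classify_zero_shot(model, gallery_images, gallery_labels, query_images):
    # Zero-shot evaluation: embed a small gallery of unseen-class samples,
    # then give each query the label of its nearest gallery embedding.
    g = F.normalize(model(gallery_images), dim=1)
    q = F.normalize(model(query_images), dim=1)
    sims = q @ g.t()                          # cosine similarities
    return gallery_labels[sims.argmax(dim=1)]

Because every class is summarized by a single learnable proxy, the loss compares each embedding against all proxies in one pass rather than mining informative pairs or triplets, which is consistent with the higher convergence rate the abstract mentions.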
ISSN: 1380-7501 (print), 1573-7721 (electronic)
DOI: 10.1007/s11042-023-17136-5