Classification of Crops through Self-Supervised Decomposition for Transfer Learning

Bibliographic Details
Published in: Journal of Aridland Agriculture 2023-10, p.81-91
Main authors: Jayanth, J., Ravikiran, H. K., Madhu, K. M.
Format: Article
Language: English
Online access: Full text
Description
Summary: This paper introduces the 2S-DT (Self-Supervised Decomposition for Transfer Learning) model, a novel method for crop classification from remotely sensed data. It addresses the frequent problem in agricultural remote sensing of misclassifying crops with similar phenological patterns. The model is evaluated on two datasets from Nanajangudu taluk in the Mysore district, an area with a highly varied irrigated agriculture system. Using self-supervised learning, the 2S-DT model tackles the misclassification that commonly arises with unlabeled classes, especially in high-resolution imagery. It employs a class decomposition (CD) layer together with a downstream learning approach; this layer reorganizes the learned representation according to the particulars of each geographical context. The model architecture is built on ResNet, a well-known deep learning framework. Each residual block consists of two 3x3 convolutional layers, each followed by batch normalization and a Rectified Linear Unit (ReLU) activation, which improves the model's learning capacity. Conv1 in ResNet18 uses a 7x7 convolutional layer with 64 filters and a stride of 2, producing an output of size 112x112x64. Conv2, comprising Res2a and Res2b, produced an output of 48x48x64, and Conv3, comprising Res3a and Res3b, produced an output of 28x28x128. These architectural choices were made to suit our experimental requirements. The features newly added in the 2S-DT model make class identification and weight updating easier, improving the stability of the spatial and spectral information in the features. Extensive experiments on the two datasets demonstrate the model's viability: overall accuracy improves significantly, with the 2S-DT model surpassing comparable models such as TVSM, 3DCAE, and the GAN model by achieving 95.65% accuracy on dataset 1 and 88.91% accuracy on dataset 2.
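The residual blocks described in the summary (two 3x3 convolutions, each followed by batch normalization and ReLU) and the Conv1 stem (a 7x7 convolution with 64 filters and stride 2) can be illustrated with a minimal PyTorch-style sketch. This is an illustration under standard ResNet-18 conventions, not the authors' implementation: the class names, the projection shortcut, and the 224x224 input size are assumptions not taken from the paper.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Residual block as described in the summary:
        two 3x3 convolutions, each followed by batch normalization and ReLU."""
        def __init__(self, in_channels, out_channels, stride=1):
            super().__init__()
            self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                                   stride=stride, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(out_channels)
            self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                                   stride=1, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(out_channels)
            self.relu = nn.ReLU(inplace=True)
            # Projection shortcut when shape changes (standard ResNet
            # convention, assumed here rather than taken from the paper).
            self.shortcut = nn.Sequential()
            if stride != 1 or in_channels != out_channels:
                self.shortcut = nn.Sequential(
                    nn.Conv2d(in_channels, out_channels, kernel_size=1,
                              stride=stride, bias=False),
                    nn.BatchNorm2d(out_channels),
                )

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + self.shortcut(x))

    class ResNet18Stem(nn.Module):
        """Conv1 stem from the summary: 7x7 convolution, 64 filters, stride 2,
        which yields a 112x112x64 output for a 224x224 input."""
        def __init__(self, in_channels=3):
            super().__init__()
            self.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
            self.bn1 = nn.BatchNorm2d(64)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.bn1(self.conv1(x)))

    # Example: stem followed by the Conv2 blocks (Res2a, Res2b) and
    # Conv3 blocks (Res3a, Res3b) named in the summary.
    stem = ResNet18Stem()
    conv2 = nn.Sequential(ResidualBlock(64, 64), ResidualBlock(64, 64))
    conv3 = nn.Sequential(ResidualBlock(64, 128, stride=2), ResidualBlock(128, 128))
    x = torch.randn(1, 3, 224, 224)
    # Channel counts (64 for Conv2, 128 for Conv3) follow the summary's description.
    print(conv3(conv2(stem(x))).shape)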
ISSN:2455-9377
DOI:10.25081/jaa.2023.v9.8566