Enhanced Infield Agriculture with Interpretable Machine Learning Approaches for Crop Classification


Detailed Description

Saved in:
Bibliographic Details
Main Authors: Murindanyi, Sudi; Nakatumba-Nabende, Joyce; Sanya, Rahman; Nakibuule, Rose; Katumba, Andrew
Format: Article
Language: eng
Subjects:
Online Access: Order full text
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Murindanyi, Sudi
Nakatumba-Nabende, Joyce
Sanya, Rahman
Nakibuule, Rose
Katumba, Andrew
description The increasing popularity of Artificial Intelligence in recent years has led to a surge in interest in image classification, especially in the agricultural sector. With the help of Computer Vision, Machine Learning, and Deep Learning, the sector has undergone a significant transformation, leading to the development of new techniques for crop classification in the field. Despite the extensive research on various image classification techniques, most have limitations such as low accuracy, limited use of data, and a lack of reporting of model size and prediction time. The most significant limitation of all is the need for model explainability. This research evaluates four different approaches for crop classification, namely traditional ML with handcrafted feature extraction methods like SIFT, ORB, and Color Histogram; a custom-designed CNN and an established DL architecture like AlexNet; transfer learning on five models pre-trained on ImageNet, namely EfficientNetV2, ResNet152V2, Xception, Inception-ResNetV2, and MobileNetV3; and cutting-edge foundation models like YOLOv8 and DINOv2, a self-supervised Vision Transformer model. All models performed well, but Xception outperformed all of them in terms of generalization, achieving 98% accuracy on the test data, with a model size of 80.03 MB and a prediction time of 0.0633 seconds. A key aspect of this research was the application of Explainable AI to provide the explainability of all the models. This journal presents the explainability of the Xception model with LIME, SHAP, and GradCAM, ensuring transparency and trustworthiness in the models' predictions. This study highlights the importance of selecting the right model according to task-specific needs. It also underscores the important role of explainability in deploying AI in agriculture, providing insightful information to help enhance AI-driven crop management strategies.
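The explainability techniques named in the abstract (LIME, SHAP, GradCAM) share one intuition: perturb or attribute regions of the input and observe how the prediction responds. A minimal occlusion-style sketch of that intuition in pure Python follows; the classifier `toy_score` and the 4x4 "image" are hypothetical stand-ins for illustration only, not the paper's models or data:

```python
# Toy occlusion-based explanation: zero out patches of a tiny "image" and
# record how much a (hypothetical) classifier's score drops. Regions whose
# occlusion causes a large drop are the ones the classifier relies on.

def toy_score(image):
    # Stand-in classifier: responds only to bright pixels in the
    # top-left 2x2 block (average brightness of that block).
    return sum(image[r][c] for r in (0, 1) for c in (0, 1)) / 4.0

def occlusion_map(image, score_fn, patch=2):
    n = len(image)
    base = score_fn(image)
    heat = [[0.0] * n for _ in range(n)]
    for r0 in range(0, n, patch):
        for c0 in range(0, n, patch):
            # Copy the image and zero out one patch.
            masked = [row[:] for row in image]
            for r in range(r0, min(r0 + patch, n)):
                for c in range(c0, min(c0 + patch, n)):
                    masked[r][c] = 0.0
            # Score drop = importance of this patch.
            drop = base - score_fn(masked)
            for r in range(r0, min(r0 + patch, n)):
                for c in range(c0, min(c0 + patch, n)):
                    heat[r][c] = drop
    return heat

# 4x4 image, bright only in the top-left 2x2 block.
image = [[0.0] * 4 for _ in range(4)]
for r in (0, 1):
    for c in (0, 1):
        image[r][c] = 1.0

heat = occlusion_map(image, toy_score)
# Occluding the top-left patch erases the score (drop 1.0); all other
# patches leave it unchanged (drop 0.0), so the heatmap highlights
# exactly the region the classifier uses.
```

Production XAI libraries replace the brute-force masking with weighted local surrogates (LIME), Shapley-value attribution (SHAP), or gradient-weighted activation maps (GradCAM), but the question asked of the model is the same.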
doi_str_mv 10.48550/arxiv.2408.12426
format Article
creationdate 2024-08-22
rights http://creativecommons.org/licenses/by/4.0
backlink https://arxiv.org/abs/2408.12426
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2408.12426
ispartof
issn
language eng
recordid cdi_arxiv_primary_2408_12426
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
title Enhanced Infield Agriculture with Interpretable Machine Learning Approaches for Crop Classification