Efficient Noninvasive FHB Estimation using RGB Images from a Novel Multiyear, Multirater Dataset



Bibliographic Details
Published in: Plant Phenomics, 2023, Vol. 5, p. 0068-0068
Authors: Rößle, Dominik; Prey, Lukas; Ramgraber, Ludwig; Hanemann, Anja; Cremers, Daniel; Noack, Patrick Ole; Schön, Torsten
Format: Article
Language: English
Online access: Full text
Description
Abstract: Fusarium head blight (FHB) is one of the most prevalent wheat diseases, causing substantial yield losses and health risks. Efficient phenotyping of FHB is crucial for accelerating resistance breeding, but currently used methods are time-consuming and expensive. The present article proposes a noninvasive classification model for FHB severity estimation using red-green-blue (RGB) images, without requiring extensive preprocessing. The model accepts images taken with consumer-grade, low-cost RGB cameras and classifies the FHB severity into 6 ordinal levels. In addition, we introduce a novel dataset consisting of around 3,000 images from 3 different years (2020, 2021, and 2022), with 2 independent FHB severity assessments per image from different raters. We used a pretrained EfficientNet (size b0), redesigned as a regression model. The results demonstrate that the interrater reliability (Cohen's kappa, κ) is substantially lower than the achieved individual network-to-rater results, e.g., 0.68 and 0.76, respectively, for the data captured in 2020. The model shows a generalization effect when trained with data from multiple years and tested on data from an independent year. Thus, using the images from 2020 and 2021 for training and 2022 for testing, we improved the κ score by 0.14, the accuracy by 0.11, F1 by 0.12, and reduced the root mean squared error by 0.5 compared to the best network trained only on a single year's data. The proposed lightweight model and methods could be deployed on mobile devices to automatically and objectively assess FHB severity with images from low-cost RGB cameras. The source code and the dataset are available at https://github.com/cvims/FHB_classification.
ISSN: 2643-6515
DOI: 10.34133/plantphenomics.0068