Extraction of soybean planting area based on feature fusion technology of multi-source low altitude unmanned aerial vehicle images
Published in: Ecological informatics 2022-09, Vol. 70, p. 101715, Article 101715
Main authors: , , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Soybean is an important food and oil crop worldwide. Accurate statistics on the soybean planting scale are of great significance for optimizing crop planting structure and for world food security, so technology that accurately extracts soybean planting areas at the field scale from UAV images combined with deep learning algorithms has important practical value. In this study, RGB images and multispectral (RGN) images were first acquired simultaneously by the quad-rotor UAV DJI Phantom 4 Pro at a flying height of 200 m, and features were extracted from both image types. Fusion images of RGB + VIs and RGN + VIs were then obtained by concatenating the band reflectance of the original images with the calculated Vegetation Indices (VIs). The soybean planting area was segmented from the feature-fusion images by U-Net, and the accuracy of the two sensors was compared. The Kappa coefficients obtained from the RGB image, the RGN image, CME (the combination of CIVE, MExG, and ExGR), ODR (the combination of OSAVI, DVI, and RDVI), RGB + CME (the combination of RGB and CME), and RGN + ODR (the combination of RGN and ODR) were 0.8806, 0.9327, 0.8437, 0.9330, 0.9420, and 0.9238, respectively. The Kappa coefficient of the combination of the original image and the vegetation indices was higher than that of the original image alone, indicating that the vegetation-index calculation helped improve the soybean recognition accuracy of the U-Net model. Among the inputs, the soybean planting area extracted from RGB + CME had the highest precision, with a Kappa coefficient of 0.9420. Finally, the soybean recognition accuracy of U-Net was compared with that of DeepLabv3+, Random Forest, and Support Vector Machine, and U-Net performed best. It can be concluded that training U-Net on fusion images that combine the original UAV image with vegetation-index features can effectively segment soybean planting areas. This work provides important technical support for farms, family cooperatives, and other business entities to manage soybean planting and production finely and at low cost.
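The feature fusion described above concatenates vegetation-index channels onto the original image bands before segmentation. The sketch below illustrates how the RGB + CME input could be built, assuming the commonly cited formulas for CIVE, MExG, ExG, and ExR; the abstract does not reproduce the exact definitions, and the input scaling, normalization step, and all names here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def cme_fusion(rgb):
    """Concatenate CME vegetation-index channels (CIVE, MExG, ExGR)
    onto an RGB image, yielding an (H, W, 6) array for U-Net input.

    rgb: float array of shape (H, W, 3), values assumed in [0, 1];
    the paper's exact scaling and index variants may differ.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Chromatic coordinates used by the excess-green index family
    # (an assumption; some papers apply the indices to raw bands).
    total = r + g + b + 1e-8  # guard against division by zero
    rn, gn, bn = r / total, g / total, b / total

    cive = 0.441 * rn - 0.811 * gn + 0.385 * bn + 18.78745  # Color Index of Vegetation Extraction
    mexg = 1.262 * gn - 0.884 * rn - 0.311 * bn             # Modified Excess Green
    exg = 2.0 * gn - rn - bn                                # Excess Green
    exr = 1.4 * rn - gn                                     # Excess Red
    exgr = exg - exr                                        # ExG minus ExR

    # Channel-wise concatenation: the "feature fusion" input (RGB + CME).
    return np.dstack([rgb, cive, mexg, exgr])

# Toy usage: a random 256 x 256 tile stands in for a UAV image patch.
tile = np.random.rand(256, 256, 3).astype(np.float32)
fused = cme_fusion(tile)
print(fused.shape)  # (256, 256, 6)
```

The same pattern would apply to the RGN + ODR input, with OSAVI, DVI, and RDVI computed from the red and near-infrared bands.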
Highlights:
• U-Net has high recognition accuracy for soybean identification.
• A visible-light sensor could replace a multispectral sensor for identifying soybeans.
• Vegetation indices help improve algorithm accuracy.
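The Kappa coefficients quoted in the abstract measure chance-corrected agreement between the predicted mask and the ground-truth mask. A minimal sketch of that evaluation, assuming binary soybean/background masks and using scikit-learn's cohen_kappa_score (the abstract does not state which implementation the authors used):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Toy binary masks standing in for ground truth and a U-Net prediction.
truth = np.random.randint(0, 2, size=(256, 256))
pred = truth.copy()
pred[:32] = 1 - pred[:32]  # flip some rows to simulate segmentation errors

# Kappa is computed per pixel, so the 2-D masks are flattened first.
kappa = cohen_kappa_score(truth.ravel(), pred.ravel())
print(f"Kappa = {kappa:.4f}")
```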
ISSN: 1574-9541
DOI: 10.1016/j.ecoinf.2022.101715