Adherent Peanut Image Segmentation Based on Multi-Modal Fusion
Published in: Sensors (Basel, Switzerland), 2024-07, Vol. 24 (14), p. 4434
Main authors: , , , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract: To address the difficulty of segmenting adherent peanut images, caused by the not-fully-convex shape of peanut pods, their complex surface texture, and their diverse structures, a multimodal fusion algorithm is proposed that achieves 2D segmentation of adherent peanut images with the assistance of 3D point clouds. First, the point cloud of a moving peanut is captured line by line with a line-structured-light imaging system, and its three-dimensional shape is obtained by stitching; a local surface-fitting algorithm is then used to compute normal vectors and curvature. Seed points are selected by the minimum-curvature principle, and neighboring points are found with the KD-Tree algorithm. The point cloud is filtered and segmented according to the normal-angle and curvature thresholds until the point cloud of each individual peanut is fully segmented, and the two-dimensional contour of the individual peanut model is then extracted with the rolling method. A search template is built from this contour, multiscale feature matching is applied to the adherent image to localize each region, and finally the segmented regions are refined with a morphological opening operation. Experimental results show that the algorithm improves segmentation performance, reaching an accuracy of 96.8%.
ISSN: 1424-8220
DOI: 10.3390/s24144434
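The point-cloud segmentation pipeline outlined in the abstract (local surface fitting for normals and curvature, minimum-curvature seed selection, KD-Tree neighbor search, growth under normal-angle and curvature thresholds) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses PCA over k-nearest neighbors as a stand-in for the local surface-fitting step, and the threshold values are illustrative placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_normals_curvature(points, k=10):
    """Estimate per-point normals and a curvature proxy via local PCA
    (a stand-in for the paper's local surface-fitting algorithm)."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    curvature = np.zeros(len(points))
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        w, v = np.linalg.eigh(nbrs.T @ nbrs)  # eigenvalues ascending
        normals[i] = v[:, 0]                  # smallest-eigenvalue direction
        curvature[i] = w[0] / w.sum()         # "surface variation" proxy
    return tree, normals, curvature

def region_grow(points, tree, normals, curvature,
                angle_thresh=np.deg2rad(15), curv_thresh=0.05, k=10):
    """Grow regions from minimum-curvature seeds: a neighbor joins the
    region if its normal deviates by less than angle_thresh, and becomes
    a new seed if its curvature is below curv_thresh (thresholds here
    are illustrative, not taken from the paper)."""
    labels = -np.ones(len(points), dtype=int)
    region = 0
    for s in np.argsort(curvature):           # minimum-curvature seeds first
        if labels[s] != -1:
            continue
        seeds = [s]
        labels[s] = region
        while seeds:
            c = seeds.pop()
            _, idx = tree.query(points[c], k=k)
            for j in idx:
                if labels[j] != -1:
                    continue
                cosang = abs(np.dot(normals[c], normals[j]))
                if np.arccos(np.clip(cosang, -1.0, 1.0)) < angle_thresh:
                    labels[j] = region
                    if curvature[j] < curv_thresh:
                        seeds.append(j)
        region += 1
    return labels
```

For example, two planar patches separated in space come out as two distinct regions, since neighbor queries never bridge the gap. The full method additionally projects each segmented cloud to a 2D contour and template-matches it against the adherent image, steps not shown here.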