Semantic Feature Matching for Robust Mapping in Agriculture
| Main authors | , |
| --- | --- |
| Format | Article |
| Language | eng |
| Online access | Order full text |
Abstract: Visual Simultaneous Localization and Mapping (SLAM) systems are an essential component of agricultural robotics, enabling autonomous navigation and the construction of accurate 3D maps of agricultural fields. However, lack of texture, varying illumination conditions, and lack of structure in the environment pose a challenge for Visual-SLAM systems that rely on traditional feature extraction and matching algorithms such as ORB or SIFT. This paper proposes 1) an object-level feature association algorithm that enables robust 3D reconstruction by exploiting the structure inherent in robotic navigation through agricultural fields, and 2) an object-level SLAM system that uses recent deep learning-based object detection and segmentation algorithms to detect and segment semantic objects in the environment, which then serve as landmarks for SLAM. We test our SLAM system on a stereo image dataset of a sorghum field. We show that our object-based feature association algorithm enables us to map 78% of a sorghum range on average, whereas traditional visual features achieve an average mapped distance of 38%. We also compare our system against ORB-SLAM2, a state-of-the-art visual SLAM algorithm.
DOI: 10.48550/arxiv.2107.04178
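To make the object-level feature association described in the abstract concrete, below is a minimal sketch, not the authors' implementation. It assumes each detected and segmented plant is reduced to a 3D centroid triangulated from the stereo pair, and associates detections between consecutive frames by solving a linear assignment problem on centroid distances with a gating threshold. The function name `associate_objects` and the `max_dist` parameter are illustrative assumptions.

```python
# Hedged sketch of object-level data association between frames.
# Assumes detections are already reduced to 3D centroids expressed in a
# common frame (e.g., after propagating the current odometry estimate).
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate_objects(prev_centroids, curr_centroids, max_dist=0.5):
    """Match object detections across frames by 3D centroid proximity.

    prev_centroids: (N, 3) array of existing landmark positions.
    curr_centroids: (M, 3) array of newly detected object centroids.
    Returns a list of (prev_idx, curr_idx) pairs; unmatched detections
    can be initialized as new landmarks by the caller.
    """
    prev_centroids = np.asarray(prev_centroids, dtype=float)
    curr_centroids = np.asarray(curr_centroids, dtype=float)
    if prev_centroids.size == 0 or curr_centroids.size == 0:
        return []

    # Pairwise Euclidean distances between previous and current centroids.
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1
    )

    # Globally optimal one-to-one assignment on the distance matrix.
    rows, cols = linear_sum_assignment(cost)

    # Reject assignments farther than the gating threshold.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```

In a full pipeline of this kind, the matched pairs would feed the SLAM back end as repeated observations of the same semantic landmark, while unmatched current detections would spawn new landmarks; the gating threshold would depend on plant spacing and odometry drift.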