Dynamic Semantics SLAM Based on Improved Mask R-CNN
Saved in:
| Published in | IEEE Access, 2022, Vol. 10, pp. 126525-126535 |
|---|---|
| Main authors | , , |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Full text |
| Abstract | Simultaneous localization and mapping (SLAM) is a popular research problem in the field of driverless cars, but some difficult problems remain unsolved. Conventional SLAM algorithms do not account for dynamic objects in the environment, which leads to problems such as inaccurate pose estimation. In this study, we present a deep learning-based SLAM scheme. To solve the problem of inaccurate feature point extraction in dynamic environments, this study uses an image pyramid to distribute feature points uniformly and extracts them with an adaptive thresholding method. To address incomplete segmentation of dynamic object masks, an improved Mask R-CNN network is proposed that improves the integrity of mask edges by adding an edge-detection branch to the Mask R-CNN network. To address incomplete rejection of dynamic feature points, this study uses a motion consistency detection algorithm to identify dynamic feature points and uses the remaining static features for pose estimation. Experimental results on the TUM RGB-D dataset show that the absolute trajectory error of the proposed SLAM algorithm is reduced by 93.2% on average compared to the ORB-SLAM2 algorithm, and by 36.6% on average compared to the Dyna-SLAM algorithm. |
| ISSN | 2169-3536 |
| DOI | 10.1109/ACCESS.2022.3226212 |
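The motion consistency detection step described in the abstract can be illustrated with a minimal sketch: a matched feature point is treated as dynamic when its distance to the epipolar line induced by the fundamental matrix F exceeds a threshold. Note this is an illustrative assumption of how such a check typically works, not the paper's actual implementation; F, the matches, and the threshold below are made up for the example.

```python
import math

def epipolar_distance(F, p1, p2):
    """Distance of point p2 = (x2, y2) to the epipolar line F @ [x1, y1, 1]^T."""
    x1 = (p1[0], p1[1], 1.0)
    # Epipolar line coefficients (a, b, c): l = F @ x1
    a, b, c = (sum(F[r][i] * x1[i] for i in range(3)) for r in range(3))
    # Point-to-line distance |a*x2 + b*y2 + c| / sqrt(a^2 + b^2)
    return abs(a * p2[0] + b * p2[1] + c) / math.hypot(a, b)

def split_static_dynamic(F, matches, thresh=1.0):
    """Partition (p1, p2) feature matches into static and dynamic sets.

    Points violating the epipolar constraint by more than `thresh`
    pixels are flagged as dynamic and excluded from pose estimation.
    """
    static, dynamic = [], []
    for p1, p2 in matches:
        target = static if epipolar_distance(F, p1, p2) <= thresh else dynamic
        target.append((p1, p2))
    return static, dynamic
```

In a real pipeline, F would be estimated robustly (e.g., with RANSAC) from the putative matches, and only the static set would be passed to the pose solver, matching the abstract's description of rejecting dynamic feature points before pose estimation.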