Learning Type-2 Fuzzy Logic for Factor Graph Based-Robust Pose Estimation With Multi-Sensor Fusion

Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, 2023-04, Vol. 24 (4), p. 1-13
Authors: Nam, Dinh Van; Gon-Woo, Kim
Format: Article
Language: English
Abstract: Although a wide variety of high-performance state estimation techniques have been introduced recently, making estimation systems robust and extensible to real-world conditions remains challenging. This paper presents a robust adaptive state estimation framework for autonomous mobile robots based on a Type-2 fuzzy inference system (FIS) and factor graph optimization. A hybrid solution combines the advantages of tightly and loosely coupled techniques by fusing an inertial sensor with exteroceptive sensors such as LiDARs and cameras. To handle uncertain input covariances and sensor failures, a learnable observation model is introduced by coupling the Type-2 FIS with factor graph optimization. In particular, a Type-2 Takagi-Sugeno FIS learns the uncertainty via particle swarm optimization before the observation model is added to the factor graph. The proposed design consists of four parts: sensor odometry, up-sampling, an FIS-based learned observation model, and factor graph-based smoothing. We evaluate the system on a mobile robot platform equipped with multiple stereo cameras, an IMU, and a LiDAR. LiDAR odometry in structured environments serves as the reference for learning the observation model of the visual-inertial estimators, avoiding the need for a bulky motion capture system. Experiments in real-world environments demonstrate the accuracy and robustness of the algorithm.
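The abstract's central mechanism, an interval Type-2 Takagi-Sugeno FIS that judges the reliability of an odometry measurement before it enters the factor graph, can be illustrated with a short Python sketch. This is not the authors' implementation: the single "residual" input feature, the rule parameters, the Nie-Tan type reduction, and the 6x6 base covariance are all illustrative assumptions, and in the paper the FIS parameters are learned with particle swarm optimization against LiDAR-odometry references rather than hand-set as below.

# Minimal sketch (assumptions labelled above): an interval Type-2
# Takagi-Sugeno FIS maps an odometry quality feature to a covariance
# inflation factor used to weight that observation in the smoother.
import numpy as np

def gauss(x, c, s):
    # Gaussian membership value of x for centre c and width s.
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Zero-order TSK rules: antecedent centre, lower/upper widths
# (footprint of uncertainty), and a crisp consequent (covariance scale).
RULES = [
    # centre, sigma_lo, sigma_hi, consequent
    (0.0,    0.15,     0.30,      1.0),   # small residual  -> trust odometry
    (0.5,    0.15,     0.30,      4.0),   # medium residual -> inflate covariance
    (1.0,    0.15,     0.30,     25.0),   # large residual  -> nearly reject it
]

def t2_tsk_scale(residual):
    # Interval Type-2 TSK inference with Nie-Tan type reduction.
    # `residual` is a normalised odometry quality feature in [0, 1],
    # e.g. a scan-matching or reprojection error (an assumed feature).
    lo = np.array([gauss(residual, c, s_lo) for c, s_lo, _, _ in RULES])
    hi = np.array([gauss(residual, c, s_hi) for c, _, s_hi, _ in RULES])
    y = np.array([c_out for *_, c_out in RULES])
    f = 0.5 * (lo + hi)                 # Nie-Tan: average of the firing interval
    return float(f @ y / f.sum())       # weighted average of rule consequents

# Usage sketch: inflate a nominal 6x6 odometry covariance before it becomes
# the noise model of a between-pose factor in the factor graph.
base_cov = np.diag([0.02, 0.02, 0.02, 0.01, 0.01, 0.01])  # assumed nominal values
scaled_cov = t2_tsk_scale(residual=0.7) * base_cov
print(np.diag(scaled_cov))

In the framework described by the abstract, a covariance produced this way would parameterize the noise model of the corresponding odometry factor, so that degraded or failing sensors are down-weighted rather than discarded during factor graph smoothing.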
ISSN: 1524-9050, 1558-0016
DOI: 10.1109/TITS.2023.3234595