Enabling continuous planetary rover navigation through FPGA stereo and visual odometry


Full Description

Bibliographic Details
Main Authors: Howard, T. M., Morfopoulos, A., Morrison, J., Kuwata, Y., Villalpando, C., Matthies, L., McHenry, M.
Format: Conference Proceedings
Language: English
Description
Summary: Safe navigation under resource constraints is a key concern for autonomous planetary rovers operating on extraterrestrial bodies. Computational power in such applications is typically constrained by radiation-hardness and energy-consumption requirements. For example, even though the microprocessors used for the Mars Science Laboratory (MSL) mission rover are an order of magnitude more powerful than those used for the rovers on the Mars Exploration Rovers (MER) mission, their computational power is still significantly less than that of contemporary desktop microprocessors. It is therefore important to move safely and efficiently through the environment while consuming a minimum of computational resources, energy, and time. Perception, pose estimation, and motion planning are generally three of the most computationally expensive processes in modern autonomous navigation architectures. An example of this is on MER, where each rover must stop, acquire and process imagery to evaluate its surroundings, estimate the relative change in pose, and generate the next mobility system maneuver [1]. This paper describes improvements in the energy efficiency and speed of planetary rover autonomous traverse accomplished by moving processes typically performed by the CPU onto a Field Programmable Gate Array (FPGA) coprocessor. Perception algorithms in general are well suited to FPGA implementations because much of the processing is naturally parallelizable. In this paper we present novel implementations of stereo and visual odometry algorithms on an FPGA. The FPGA stereo implementation is an extension of [2] that uses "random in, linear out" rectification and a higher-performance interface between the rectification, filter, and disparity stages of the stereo pipeline. The improved visual odometry component utilizes an FPGA implementation of a Harris feature detector and a sum of absolute differences (SAD) operator.
The FPGA implementations of the stereo and visual odometry functionality have demonstrated a performance improvement of approximately three orders of magnitude compared to the MER-class avionics. These more efficient perception and pose estimation modules have been merged with motion planning techniques that allow for continuous steering and driving to navigate cluttered obstacle fields without stopping to perceive. The resulting faster visual odometry rates also allow wheel slip to be detected earlier and more reliably. Predictions of resulting improveme
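As background on why this workload parallelizes so well, the SAD matching at the core of both the stereo and visual odometry stages can be sketched in software. The following is a minimal CPU reference in NumPy, not the paper's FPGA design; the window size and disparity range are illustrative assumptions. Each pixel's best-match search is independent of every other pixel's, which is exactly the structure an FPGA pipeline exploits.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Naive SAD block-matching stereo: for each pixel in the left
    image, find the horizontal shift d that minimizes the sum of
    absolute differences over a win x win window in the right image.
    Every (pixel, d) cost is independent, so the search is naturally
    parallelizable in hardware."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left image shifted by three columns, the recovered disparity in the valid interior region is 3. The same SAD cost, applied around Harris corners between consecutive frames instead of along stereo scanlines, is the matching step the visual odometry component uses.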
ISSN: 1095-323X, 2996-2358
DOI: 10.1109/AERO.2012.6187041