Increasing Autonomy of Unmanned Aircraft Systems Through the Use of Imaging Sensors
Format: Dissertation
Language: English
Abstract: The range of missions performed by Unmanned Aircraft Systems (UAS) has been steadily growing over the past decades thanks to continued development in several disciplines. The goal of increasing the autonomy of UASs is to widen the range of tasks that can be carried out without, or with minimal, external help. This thesis presents methods for increasing specific aspects of the autonomy of UASs operating in both outdoor and indoor environments, where cameras are used as the primary sensors.
First, a method for fusing color and thermal images for object detection, geolocation, and tracking for UASs operating primarily outdoors is presented. Specifically, a method for building saliency maps in which human body locations are marked as points of interest is described. Such maps can be used in emergency situations to increase the situational awareness of first responders or of a robotic system itself. Additionally, the same method is applied to the problem of vehicle tracking. The generated stream of geographical locations of tracked vehicles increases situational awareness by allowing qualitative reasoning about, for example, vehicles overtaking, entering, or leaving crossings.
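The abstract does not spell out how the color-thermal fusion is performed. Purely as an illustration of the general idea, and not the pipeline actually developed in the thesis, the sketch below blends a thermal body-temperature mask with the response of an off-the-shelf color-based person detector into a single saliency map. The registration assumption, the temperature band, the HOG detector, and the blending weight `alpha` are all placeholder assumptions.

```python
# Illustrative sketch only -- NOT the method from the thesis.
# Assumes the thermal image is already registered (pixel-aligned) to the
# color image and radiometrically calibrated to degrees Celsius.
import cv2
import numpy as np

def human_saliency_map(color_bgr, thermal_celsius,
                       body_temp_range=(28.0, 40.0), alpha=0.5):
    """Blend a thermal body-temperature mask with a color-based person
    detector response into one saliency map with values in [0, 1]."""
    # 1. Thermal cue: pixels whose temperature falls in a human-like band.
    lo, hi = body_temp_range
    thermal_mask = ((thermal_celsius >= lo) &
                    (thermal_celsius <= hi)).astype(np.float32)

    # 2. Color cue: HOG pedestrian detector (a stand-in for whatever
    #    appearance model the thesis actually uses).
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, weights = hog.detectMultiScale(color_bgr, winStride=(8, 8))
    color_score = np.zeros(thermal_mask.shape, dtype=np.float32)
    for (x, y, w, h), wgt in zip(rects, np.ravel(weights)):
        color_score[y:y + h, x:x + w] = np.maximum(
            color_score[y:y + h, x:x + w], float(wgt))
    if color_score.max() > 0:
        color_score /= color_score.max()

    # 3. Fuse the two cues and smooth the result into a saliency map.
    saliency = alpha * thermal_mask + (1.0 - alpha) * color_score
    return cv2.GaussianBlur(saliency, (15, 15), 0)
```

Peaks in such a map could then be geolocated by projecting the corresponding pixels through the camera model and the aircraft's pose, which is the step the abstract refers to as geolocation and tracking.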
Second, two approaches to the UAS indoor localization problem in the absence of GPS-based positioning are presented. Both use cameras as the main sensors and enable autonomous indoor flight and navigation. The first approach takes advantage of cooperation with a ground robot, which provides the UAS with its localization information. The second approach uses marker-based visual pose estimation in which all computations are performed onboard a small-scale aircraft, which further increases its autonomy by not relying on external computational power.
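The abstract describes the second approach only at this level, so the following is a hedged sketch of what marker-based pose estimation typically involves, using a standard PnP solver rather than whatever marker system and solver the thesis actually employs. The marker size, camera intrinsics, corner ordering, and the upstream corner-detection step are all assumed inputs.

```python
# Illustrative sketch only -- generic marker-based pose estimation with a
# PnP solver; the thesis may use a different marker system or solver.
import cv2
import numpy as np

MARKER_SIZE = 0.20  # marker edge length in metres (assumed value)

# 3D corner coordinates of the marker in its own frame (z = 0 plane),
# ordered to match the 2D corners delivered by the marker detector.
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float32)

def camera_pose_from_marker(corners_2d, camera_matrix, dist_coeffs):
    """Estimate the camera pose relative to one detected marker.

    corners_2d: (4, 2) pixel coordinates of the marker corners.
    Returns (R, t): the camera's rotation and translation expressed in
    the marker frame, which stands in for the aircraft's indoor pose.
    """
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  corners_2d.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # axis-angle -> rotation matrix
    # solvePnP gives the marker pose in the camera frame; invert it to
    # obtain the camera pose in the marker frame.
    return R.T, (-R.T @ tvec).ravel()
```

Running a computation of this kind entirely onboard, as the abstract emphasizes, keeps the estimation loop independent of any ground station or offboard computer.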