Fusing Multi-sensor Input with State Information on TinyML Brains for Autonomous Nano-drones
Main Authors: | , , |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Autonomous nano-drones (~10 cm in diameter), thanks to their
ultra-low-power TinyML-based brains, are capable of coping with real-world
environments. However, due to their simplified sensors and compute units, they
are still far from the sense-and-act capabilities shown by their bigger
counterparts. This system paper presents a novel deep learning-based pipeline
that fuses multi-sensorial input (i.e., low-resolution images and an 8x8 depth
map) with the robot's state information to tackle a human pose estimation
task. Thanks to our design, the proposed system -- trained in simulation and
tested on a real-world dataset -- improves on a state-unaware
State-of-the-Art baseline, increasing the R^2 regression metric by up to 0.10
on the distance prediction. |
DOI: | 10.48550/arxiv.2404.02567 |
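The abstract describes a sensor-fusion architecture: a low-resolution camera frame and an 8x8 depth map are fused with the robot's state to regress the human's pose. As a rough illustration of that idea, below is a minimal PyTorch sketch of one plausible late-fusion regressor; the layer sizes, the 160x96 frame resolution, the 6-D state vector, and the 4-D pose output (x, y, z, yaw) are illustrative assumptions, not the paper's published architecture.

```python
# Minimal sketch of a multi-sensor fusion regressor (assumed architecture,
# not the authors' exact network): a small CNN encodes the low-resolution
# camera frame, a tiny MLP encodes the flattened 8x8 depth map, another MLP
# encodes the robot's state vector, and a fused head regresses the pose.
import torch
import torch.nn as nn

class FusionPoseNet(nn.Module):
    def __init__(self, state_dim: int = 6, pose_dim: int = 4):
        super().__init__()
        # Image branch: grayscale low-resolution frame -> 256-D embedding.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # 16 * 4 * 4 = 256
        )
        # Depth branch: flattened 8x8 depth map -> 32-D embedding.
        self.depth_branch = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 32), nn.ReLU(),
        )
        # State branch: e.g., attitude/velocity estimates -> 16-D embedding.
        self.state_branch = nn.Sequential(
            nn.Linear(state_dim, 16), nn.ReLU(),
        )
        # Fusion head: concatenate the three embeddings and regress the
        # human's pose relative to the drone (assumed x, y, z, yaw).
        self.head = nn.Sequential(
            nn.Linear(256 + 32 + 16, 64), nn.ReLU(),
            nn.Linear(64, pose_dim),
        )

    def forward(self, image, depth, state):
        fused = torch.cat(
            [self.image_branch(image),
             self.depth_branch(depth),
             self.state_branch(state)], dim=1)
        return self.head(fused)

# Example: a batch of 160x96 grayscale frames, 8x8 depth maps, 6-D states.
model = FusionPoseNet()
pose = model(torch.randn(2, 1, 96, 160),
             torch.randn(2, 1, 8, 8),
             torch.randn(2, 6))
print(pose.shape)  # torch.Size([2, 4])
```

Late fusion by concatenation keeps each sensor branch tiny and independent, the kind of footprint a TinyML-class microcontroller budget favors; how the actual pipeline fuses the modalities is detailed in the paper itself.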