Eagle: End-to-end Deep Reinforcement Learning based Autonomous Control of PTZ Cameras
Saved in:
Main authors:
Format: Article
Language: eng
Keywords:
Online access: Order full text
Summary: Existing approaches for autonomous control of pan-tilt-zoom (PTZ) cameras use
multiple stages where object detection and localization are performed
separately from the control of the PTZ mechanisms. These approaches require
manual labels and suffer from performance bottlenecks due to error propagation
across the multi-stage flow of information. The large size of object detection
neural networks also makes prior solutions infeasible for real-time deployment
in resource-constrained devices. We present an end-to-end deep reinforcement
learning (RL) solution called Eagle to train a neural network policy that
directly takes images as input to control the PTZ camera. Training
reinforcement learning is cumbersome in the real world due to labeling effort,
runtime environment stochasticity, and fragile experimental setups. We
introduce a photo-realistic simulation framework for training and evaluation of
PTZ camera control policies. Eagle achieves superior camera control performance
by maintaining the object of interest close to the center of captured images at
high resolution and has up to 17% more tracking duration than the
state-of-the-art. Eagle policies are lightweight (90x fewer parameters than
Yolo5s) and can run on embedded camera platforms such as Raspberry PI (33 FPS)
and Jetson Nano (38 FPS), facilitating real-time PTZ tracking for
resource-constrained environments. With domain randomization, Eagle policies
trained in our simulator can be transferred directly to real-world scenarios.
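The abstract describes a single neural network policy that maps a raw camera frame directly to pan/tilt/zoom commands, bypassing a separate detection stage. A minimal sketch of that interface is below; the architecture, input size, and action convention are assumptions for illustration (the abstract does not specify the network), using a deliberately tiny linear policy in the spirit of Eagle's lightweight models:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyPTZPolicy:
    """Hypothetical end-to-end policy: downsampled grayscale frame -> [pan, tilt, zoom]."""

    def __init__(self, frame_hw=(16, 16)):
        self.frame_hw = frame_hw
        n_in = frame_hw[0] * frame_hw[1]
        # Deliberately few parameters, echoing the paper's claim of
        # lightweight policies suitable for embedded camera platforms.
        self.W = rng.normal(0, 0.01, size=(3, n_in))
        self.b = np.zeros(3)

    def act(self, frame):
        # Downsample by index striding, normalize, flatten, apply linear map.
        h, w = self.frame_hw
        ys = np.linspace(0, frame.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, frame.shape[1] - 1, w).astype(int)
        x = frame[np.ix_(ys, xs)].ravel() / 255.0
        # tanh bounds each command to [-1, 1], read here as relative
        # pan/tilt/zoom velocities (an assumed action convention).
        return np.tanh(self.W @ x + self.b)

policy = TinyPTZPolicy()
frame = rng.integers(0, 256, size=(480, 640)).astype(np.float64)
action = policy.act(frame)  # three bounded commands: [pan, tilt, zoom]
```

In an RL training loop (e.g. in the paper's photo-realistic simulator), the reward would encourage keeping the tracked object near the image center at high resolution, and the weights would be updated by a policy-gradient method rather than set randomly as here.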
DOI: 10.48550/arxiv.2304.04356