Efficient Single-Image Depth Estimation on Mobile Devices, Mobile AI & AIM 2022 Challenge: Report
Main authors: | |
Format: | Article |
Language: | English |
Keywords: | |
Online access: | Order full text |
Summary: | Various depth estimation models are now widely used on many mobile and IoT
devices for image segmentation, bokeh effect rendering, object tracking, and
many other mobile tasks. It is therefore crucial to have efficient and
accurate depth estimation models that can run fast on low-power mobile
chipsets. In this Mobile AI challenge, the target was to develop deep
learning-based single-image depth estimation solutions that can achieve
real-time performance on IoT platforms and smartphones. For this, the
participants used a large-scale RGB-to-depth dataset collected with the ZED
stereo camera, which is capable of generating depth maps for objects located
at up to 50 meters. The runtime of all models was evaluated on the Raspberry
Pi 4 platform, where the developed solutions were able to generate
VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity
results. All models developed in the challenge are also compatible with any
Android or Linux-based mobile device; a detailed description of each is
provided in this paper. |
DOI: | 10.48550/arxiv.2211.04470 |