MobileInst: Video Instance Segmentation on the Mobile
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Video instance segmentation on mobile devices is an important yet very
challenging edge AI problem. It mainly suffers from (1) heavy computation and
memory costs for frame-by-frame pixel-level instance perception and (2)
complicated heuristics for tracking objects. To address those issues, we
present MobileInst, a lightweight and mobile-friendly framework for video
instance segmentation on mobile devices. Firstly, MobileInst adopts a mobile
vision transformer to extract multi-level semantic features and presents an
efficient query-based dual-transformer instance decoder for mask kernels and a
semantic-enhanced mask decoder to generate instance segmentation per frame.
Secondly, MobileInst exploits simple yet effective kernel reuse and kernel
association to track objects for video instance segmentation. Further, we
propose temporal query passing to enhance the tracking ability for kernels. We
conduct experiments on COCO and YouTube-VIS datasets to demonstrate the
superiority of MobileInst and evaluate the inference latency on a single CPU
core of the Snapdragon 778G mobile platform without additional acceleration methods.
On the COCO dataset, MobileInst achieves 31.2 mask AP and 433 ms on the mobile
CPU, which reduces the latency by 50% compared to the previous SOTA. For video
instance segmentation, MobileInst achieves 35.0 AP on YouTube-VIS 2019 and 30.1
AP on YouTube-VIS 2021. Code will be available to facilitate real-world
applications and future research.

DOI: 10.48550/arxiv.2303.17594
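The abstract describes per-frame masks produced by dynamic kernels from a query-based instance decoder, with tracking handled by reusing and associating those kernels across frames. The sketch below is a rough illustration of that idea only: the function names (`masks_from_kernels`, `associate_kernels`) and the greedy cosine-similarity matching are assumptions for exposition, not MobileInst's actual implementation, which additionally uses temporal query passing.

```python
# Minimal sketch (not the official MobileInst code): per-frame masks from
# dynamic instance kernels, plus a naive cross-frame kernel association.
import torch
import torch.nn.functional as F


def masks_from_kernels(kernels: torch.Tensor, mask_features: torch.Tensor) -> torch.Tensor:
    """kernels: (N, C), one dynamic kernel per instance query.
    mask_features: (C, H, W), semantic-enhanced mask features of the frame.
    Returns (N, H, W) soft instance masks."""
    logits = torch.einsum("nc,chw->nhw", kernels, mask_features)
    return logits.sigmoid()


def associate_kernels(prev_kernels: torch.Tensor, curr_kernels: torch.Tensor) -> torch.Tensor:
    """Greedy stand-in for kernel association: match each current kernel to
    the most similar previous kernel by cosine similarity.
    Returns (N_curr,) indices into prev_kernels."""
    sim = F.cosine_similarity(
        curr_kernels.unsqueeze(1), prev_kernels.unsqueeze(0), dim=-1
    )  # (N_curr, N_prev)
    return sim.argmax(dim=1)


# Toy usage with random tensors standing in for decoder outputs.
if __name__ == "__main__":
    C, H, W, N = 32, 64, 64, 10
    frame_feats = torch.randn(C, H, W)
    prev_k, curr_k = torch.randn(N, C), torch.randn(N, C)
    masks = masks_from_kernels(curr_k, frame_feats)   # (10, 64, 64)
    track_ids = associate_kernels(prev_k, curr_k)     # (10,)
    print(masks.shape, track_ids.shape)
```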