Object Segmentation with Audio Context
Main authors: | |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Visual objects often have acoustic signatures that are naturally synchronized
with them in audio-bearing video recordings. For this project, we explore
multimodal feature aggregation for the video instance segmentation task,
integrating audio features into our video segmentation model in an
audio-visual learning scheme. Our method builds on an existing video instance
segmentation method that leverages rich contextual information across video
frames. Since this is the first attempt to investigate audio-visual instance
segmentation, we collect a novel dataset of 20 vocal classes with synchronized
video and audio recordings. By using a combined decoder to fuse video and
audio features, our model shows a slight improvement over the base model.
Additionally, we demonstrate the effectiveness of the different modules
through extensive ablations. |
DOI: | 10.48550/arxiv.2301.10295 |
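The abstract describes a combined decoder that fuses video and audio features. As a rough illustration of that idea, the sketch below shows one plausible way to inject synchronized audio context into per-frame video features via cross-attention in PyTorch. The module name, feature dimensions, and choice of cross-attention are assumptions made here for clarity; the paper's actual fusion design may differ.

```python
# Minimal sketch (an assumption, not the paper's released code) of fusing
# video frame features with synchronized audio features via cross-attention.
import torch
import torch.nn as nn


class AudioVisualFusion(nn.Module):
    """Fuse audio context into video tokens with cross-attention (illustrative)."""

    def __init__(self, video_dim: int = 256, audio_dim: int = 128, num_heads: int = 8):
        super().__init__()
        # Project audio features into the video feature space.
        self.audio_proj = nn.Linear(audio_dim, video_dim)
        # Video tokens attend to audio tokens (queries = video, keys/values = audio).
        self.cross_attn = nn.MultiheadAttention(video_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(video_dim)

    def forward(self, video_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (batch, num_video_tokens, video_dim)
        # audio_feats: (batch, num_audio_steps, audio_dim)
        audio = self.audio_proj(audio_feats)
        fused, _ = self.cross_attn(query=video_feats, key=audio, value=audio)
        # Residual connection keeps the purely visual signal as a fallback,
        # so the model can ignore audio when it is uninformative.
        return self.norm(video_feats + fused)


if __name__ == "__main__":
    fusion = AudioVisualFusion()
    video = torch.randn(2, 36 * 100, 256)  # e.g. 36 frames x 100 tokens per frame
    audio = torch.randn(2, 64, 128)        # e.g. 64 audio time steps
    print(fusion(video, audio).shape)      # torch.Size([2, 3600, 256])
```

The fused tokens would then feed the downstream instance segmentation decoder in place of the purely visual tokens; the residual connection is one simple way to keep performance close to the video-only baseline when the audio track carries little signal.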