Distilling Vision-Language Pre-training to Collaborate with Weakly-Supervised Temporal Action Localization
Format: Article
Language: English
Abstract: Weakly-supervised temporal action localization (WTAL) learns to detect and classify action instances with only category labels. Most methods adopt off-the-shelf Classification-Based Pre-training (CBP) to generate video features for action localization. However, the differing optimization objectives of classification and localization cause the temporally localized results to suffer from a serious incompleteness issue. To tackle this issue without additional annotations, this paper considers distilling free action knowledge from Vision-Language Pre-training (VLP), since we surprisingly observe that the localization results of vanilla VLP exhibit an over-completeness issue, which is exactly complementary to the CBP results. To fuse this complementarity, we propose a novel distillation-collaboration framework with two branches acting as CBP and VLP respectively. The framework is optimized through a dual-branch alternate training strategy. Specifically, during the B step, we distill confident background pseudo-labels from the CBP branch; during the F step, confident foreground pseudo-labels are distilled from the VLP branch. As a result, the dual-branch complementarity is effectively fused to promote a strong alliance. Extensive experiments and ablation studies on THUMOS14 and ActivityNet1.2 reveal that our method significantly outperforms state-of-the-art methods.
DOI: 10.48550/arxiv.2212.09335
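The abstract describes a dual-branch alternate training strategy in which each branch supervises the other with its most confident pseudo-labels: the CBP branch contributes background pseudo-labels (B step) and the VLP branch contributes foreground pseudo-labels (F step). The sketch below is not the authors' implementation; it is a minimal toy illustration of that pseudo-label exchange, in which the branch modules, confidence thresholds, feature dimensions, and input data are all assumed for demonstration.

```python
# Minimal sketch (assumed, not the paper's code) of dual-branch alternate training:
# a CBP-like branch and a VLP-like branch exchange confident pseudo-labels.
import torch
import torch.nn as nn

T, D = 100, 1024                      # snippets per video, feature dim (assumed)
feats = torch.randn(1, T, D)          # toy snippet features

# Each branch predicts a per-snippet foreground probability (toy heads).
cbp_branch = nn.Sequential(nn.Linear(D, 1), nn.Sigmoid())
vlp_branch = nn.Sequential(nn.Linear(D, 1), nn.Sigmoid())
opt = torch.optim.Adam(
    list(cbp_branch.parameters()) + list(vlp_branch.parameters()), lr=1e-4)
bce = nn.BCELoss()

tau_bg, tau_fg = 0.1, 0.9             # confidence thresholds (assumed)

for step in range(10):
    cbp_score = cbp_branch(feats).squeeze(-1)   # (1, T) foreground probability
    vlp_score = vlp_branch(feats).squeeze(-1)   # (1, T) foreground probability

    if step % 2 == 0:
        # "B step": snippets the CBP branch confidently calls background become
        # background pseudo-labels that supervise the VLP branch, suppressing
        # its over-complete proposals.
        mask = cbp_score.detach() < tau_bg
        loss = (bce(vlp_score[mask], torch.zeros_like(vlp_score[mask]))
                if mask.any() else vlp_score.sum() * 0.0)
    else:
        # "F step": snippets the VLP branch confidently calls foreground become
        # foreground pseudo-labels that supervise the CBP branch, completing
        # its incomplete detections.
        mask = vlp_score.detach() > tau_fg
        loss = (bce(cbp_score[mask], torch.ones_like(cbp_score[mask]))
                if mask.any() else cbp_score.sum() * 0.0)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Alternating the two steps, with pseudo-labels detached from the gradient graph, keeps each branch supervised only by the other branch's confident predictions, which is the complementarity-fusing behavior the abstract attributes to the framework.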