TRACL: Temporal reconstruction and adaptive consistency loss for semi‐supervised video semantic segmentation


Bibliographic Details
Published in: IET Image Processing, February 2024, Vol. 18 (2), pp. 348-361
Authors: Liang, Zhixue; Dong, Wenyong; Zhang, Bo
Format: Article
Language: English
Online access: Full text
Description
Abstract: While existing supervised semantic segmentation methods have shown significant performance improvements, they rely heavily on large-scale pixel-level annotated data. To reduce this dependence, recent research has proposed semi-supervised learning-based methods that have achieved great success. However, almost all of these works are dedicated to image semantic segmentation, while semi-supervised video semantic segmentation (SVSS) has barely been explored. Because video data differ significantly from images, simply adapting semi-supervised image semantic segmentation approaches to SVSS may neglect the inherent temporal correlations across video frames. This paper presents a novel method, named TRACL, with temporal reconstruction (TR) and adaptive consistency loss (ACL) for SVSS, aiming to fully utilize the temporal relations among the frames of a video clip. The authors' TR method performs reconstruction at both the feature and output levels to narrow the distribution gap between frames within a video. Specifically, considering the underlying data distribution, the authors construct a Gaussian model for each category and use its probability density function to measure the similarity between feature maps for temporal feature reconstruction. The authors' ACL adaptively selects between two pixel-wise consistency losses, a Flow Consistency Loss and a Reconstruction Consistency Loss, providing stronger supervision signals for unlabelled frames during model training. Additionally, the authors extend their method to unlabelled video, gaining more training data by employing a mean-teacher structure. Extensive experiments on three datasets (Cityscapes, CamVid, and VSPW) demonstrate that the proposed method outperforms previous state-of-the-art methods.
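The per-category Gaussian modelling described in the abstract can be illustrated with a minimal sketch. Assuming per-pixel features and (pseudo-)labels are available as flat arrays, the snippet below fits a diagonal-covariance Gaussian to each class on one frame and scores a neighbouring frame's features under those distributions; the function names, shapes, and usage are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(features, labels, num_classes):
    """Fit one diagonal-covariance Gaussian per class (hypothetical sketch)."""
    gaussians = {}
    for c in range(num_classes):
        feats_c = features[labels == c]
        if len(feats_c) < 2:
            continue  # too few pixels to estimate a distribution
        mean = feats_c.mean(axis=0)
        var = feats_c.var(axis=0) + 1e-6  # regularize to keep the covariance valid
        gaussians[c] = multivariate_normal(mean=mean, cov=np.diag(var))
    return gaussians

def pdf_similarity(gaussians, query_features, query_labels):
    """Score each query pixel's feature under its class Gaussian.

    A higher density means the neighbouring frame's feature is more
    consistent with the current frame's class distribution, so it can be
    weighted more heavily when reconstructing the feature map.
    """
    scores = np.zeros(len(query_features))
    for c, g in gaussians.items():
        mask = query_labels == c
        if mask.any():
            scores[mask] = g.pdf(query_features[mask])
    return scores  # classes without a fitted Gaussian keep a score of 0

# Toy usage: 8-dim features for a 16x16 frame, 5 classes.
rng = np.random.default_rng(0)
feats_t = rng.normal(size=(256, 8))       # features of frame t
labels_t = rng.integers(0, 5, size=256)   # (pseudo-)labels of frame t
feats_t1 = rng.normal(size=(256, 8))      # features of frame t+1
labels_t1 = rng.integers(0, 5, size=256)

models = fit_class_gaussians(feats_t, labels_t, num_classes=5)
weights = pdf_similarity(models, feats_t1, labels_t1)
```

Likewise, the adaptive choice between the two consistency losses and the mean-teacher update can be sketched as below. The per-pixel minimum is a simplifying stand-in for the paper's ACL selection rule, and ema_update is the standard mean-teacher exponential moving average, not code from the paper.

```python
import torch
import torch.nn.functional as F

def adaptive_consistency_loss(student_logits, flow_warped_logits, reconstructed_logits):
    """Per pixel, keep whichever target (flow-warped vs. reconstructed)
    the student already agrees with better; a stand-in for ACL's rule."""
    flow_loss = F.mse_loss(student_logits, flow_warped_logits, reduction="none")
    recon_loss = F.mse_loss(student_logits, reconstructed_logits, reduction="none")
    return torch.minimum(flow_loss, recon_loss).mean()

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Mean-teacher: teacher weights track an exponential moving average
    of the student weights after each training step."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)
```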
ISSN: 1751-9659, 1751-9667
DOI: 10.1049/ipr2.12952