Domain Adaptive Semantic Segmentation via Entropy-Ranking and Uncertain Learning-Based Self-Training

Bibliographic Details
Published in: IEEE/CAA Journal of Automatica Sinica, 2022-08, Vol. 9 (8), pp. 1524-1527
Main authors: Peng, Chengli; Ma, Jiayi
Format: Article
Language: English
Description
Abstract: Dear Editor, This letter develops two new self-training strategies for domain adaptive semantic segmentation, which formulate self-training as the processes of mining more training samples and reducing the influence of false pseudo-labels. In particular, a self-training strategy based on entropy-ranking is proposed to mine intra-domain information, so that numerous false pseudo-labels can be exploited and rectified, and more pseudo-labels can be involved in training. Meanwhile, another self-training strategy is developed to handle regions that may carry false pseudo-labels. Specifically, an uncertainty loss that lets the network automatically decide whether pseudo-labels are correct is proposed to improve network optimization, so that the influence of false pseudo-labels is reduced. Experimental results show that, compared with the baseline, the average mIoU gain brought by the method reaches 4.3%. Extensive benchmark experiments further highlight its effectiveness against existing state-of-the-art methods.
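
The abstract describes two ideas: ranking target-domain images by prediction entropy to separate reliable from unreliable pseudo-labels, and a loss that down-weights pixels whose pseudo-labels the network is unsure about. The PyTorch sketch below is a minimal illustration of both ideas under stated assumptions; the function names, the split ratio, and the confidence-weighted cross-entropy are illustrative stand-ins, not the exact formulation used in the letter.

# Minimal sketch, assuming `model(image)` returns per-pixel class logits.
# Helper names and the 0.5 split ratio are hypothetical.
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    # logits: (C, H, W); mean per-pixel Shannon entropy of the softmax prediction
    prob = F.softmax(logits, dim=0)
    ent = -(prob * torch.log(prob + 1e-12)).sum(dim=0)
    return ent.mean()

def split_by_entropy(model, images, ratio=0.5):
    # Rank target images by prediction entropy and split them into an
    # "easy" (low-entropy, more reliable pseudo-labels) and a "hard"
    # (high-entropy) subset, in the spirit of entropy-ranking self-training.
    model.eval()
    scores = []
    with torch.no_grad():
        for idx, img in enumerate(images):
            logits = model(img.unsqueeze(0)).squeeze(0)  # (C, H, W)
            scores.append((prediction_entropy(logits).item(), idx))
    scores.sort()  # ascending entropy: most confident images first
    cut = int(len(scores) * ratio)
    easy = [idx for _, idx in scores[:cut]]
    hard = [idx for _, idx in scores[cut:]]
    return easy, hard

def confidence_weighted_ce(logits, pseudo_labels):
    # Generic stand-in for an uncertainty-aware pseudo-label loss:
    # per-pixel cross-entropy scaled by the network's own confidence,
    # so pixels with dubious pseudo-labels contribute less.
    prob = F.softmax(logits, dim=1)            # logits: (N, C, H, W)
    conf, _ = prob.max(dim=1)                  # (N, H, W)
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (conf.detach() * ce).mean()

In such a pipeline, the "easy" subset would typically be used to retrain the segmentation network with its pseudo-labels, while the "hard" subset is revisited or adapted further; the confidence weighting keeps unreliable pixels from dominating the gradient.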
ISSN: 2329-9266, 2329-9274
DOI: 10.1109/JAS.2022.105767