A self‐distillation object segmentation method via frequency domain knowledge augmentation
Saved in:
Published in: | IET Computer Vision 2023-04, Vol.17 (3), p.341-351 |
---|---|
Main authors: | , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | Most self‐distillation methods need complex auxiliary teacher structures and require large numbers of training samples for object segmentation tasks. To address this challenge, a self‐distillation object segmentation method via frequency domain knowledge augmentation is proposed. Firstly, an object segmentation network that efficiently integrates multi‐level features is constructed. Secondly, a pixel‐wise virtual teacher generation model is proposed to drive the transfer of pixel‐wise knowledge to the object segmentation network through self‐distillation learning, so as to improve its generalisation ability. Finally, a frequency domain knowledge adaptive generation method is proposed to augment data, which utilises a differentiable quantisation operator to dynamically adjust a learnable pixel‐wise quantisation table. Furthermore, we reveal that convolutional neural networks are more inclined to learn low‐frequency information during training. Experiments on five object segmentation datasets show that the proposed method effectively enhances the performance of the object segmentation network. Its performance gains exceed those of recent self‐distillation methods, with average Fβ and mIoU improved by about 1.5% and 3.6% respectively over a typical feature‐refinement self‐distillation method.
To improve lightweight network performance, we propose a self‐distillation object segmentation method that requires neither complex auxiliary teacher structures nor large numbers of training samples. Moreover, from the perspective of feature learning, we reveal that CNNs are more inclined to learn low‐frequency information. |
---|---|
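The abstract's frequency-domain augmentation can be illustrated with a minimal numpy sketch, assuming a JPEG-style 8×8 DCT whose coefficients are quantised by a learnable table through a differentiable (soft) rounding surrogate. The function names, the `tanh`-based `soft_round`, and the block size are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix, so M @ M.T == I.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(n)
    M[1:] *= np.sqrt(2 / n)
    return M

def soft_round(x, tau=0.5):
    # Differentiable surrogate for round(): a tanh step between integers.
    # As tau -> 0 this approaches hard rounding; larger tau gives a
    # smoother (more easily trainable) approximation.
    r = x - np.floor(x) - 0.5
    return np.floor(x) + 0.5 + np.tanh(r / tau) / (2 * np.tanh(0.5 / tau))

def freq_augment(block, table, tau=0.5):
    # DCT -> soft-quantise with a learnable per-coefficient table ->
    # dequantise -> inverse DCT. Larger table entries discard more of
    # that frequency, which is how the augmentation is steered.
    M = dct_matrix(block.shape[0])
    coeffs = M @ block @ M.T
    quantised = soft_round(coeffs / table, tau) * table
    return M.T @ quantised @ M
```

In a training loop the `table` would be a learnable parameter updated by gradient descent (hence the differentiable rounding); a fine table leaves the block nearly unchanged, while a coarse table suppresses high-frequency content.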
ISSN: | 1751-9632 1751-9640 |
DOI: | 10.1049/cvi2.12170 |