Knowledge transfer via distillation from time and frequency domain for time series classification

Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2023, Vol. 53 (2), p. 1505-1516
Main Authors: Ouyang, Kewei; Hou, Yi; Zhang, Ye; Ma, Chao; Zhou, Shilin
Format: Article
Language: eng
Subjects:
Online Access: Full text
Description
Abstract: Although deep learning has achieved great success on time series classification, two issues remain unsolved. First, existing methods mainly extract features in a single domain only, which means that useful information in the other domain cannot be exploited. Second, multi-domain learning usually increases the size of the model, making it difficult to deploy on mobile devices. In this study, a lightweight double-branch model, called the Time Frequency Knowledge Reception Network (TFKR-Net), is proposed to simultaneously fuse information from the time and frequency domains. Instead of directly merging knowledge from teacher models pretrained in different domains, TFKR-Net independently distills knowledge from the teacher models in the time and frequency domains, which helps maintain knowledge diversity. Experimental results on the UCR (University of California, Riverside) archive demonstrate that TFKR-Net significantly reduces model size and improves computational efficiency with only a slight loss in classification accuracy.
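The dual-teacher distillation described in the abstract can be sketched as a student loss with two independent soft-target terms, one per domain teacher. This is a minimal illustration only: the weighting (`alpha`, `beta`), the temperature `T`, and the plain KL-divergence form are assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax (numerically stabilized)."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def dual_teacher_distill_loss(student_logits, time_teacher_logits,
                              freq_teacher_logits, label,
                              T=2.0, alpha=0.5, beta=0.5):
    """Cross-entropy on the true label plus two separate KL terms,
    one against the time-domain teacher and one against the
    frequency-domain teacher (hypothetical weights alpha, beta).
    Keeping the terms separate, rather than averaging the teachers
    first, mirrors the idea of preserving knowledge diversity."""
    s_soft = softmax(student_logits, T)
    ce = -np.log(np.clip(softmax(student_logits)[label], 1e-12, 1.0))
    kd_time = kl_div(softmax(time_teacher_logits, T), s_soft)
    kd_freq = kl_div(softmax(freq_teacher_logits, T), s_soft)
    # T**2 rescaling keeps gradient magnitudes comparable across temperatures.
    return float(ce + alpha * (T ** 2) * kd_time + beta * (T ** 2) * kd_freq)
```

When both teachers agree with the student, the KL terms vanish and the loss reduces to plain cross-entropy; disagreeing teachers raise the loss and pull the student toward their soft predictions.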
ISSN: 0924-669X
1573-7497
DOI: 10.1007/s10489-022-03485-5