Distilling Knowledge from CNN-Transformer Models for Enhanced Human Action Recognition
Abstract: This paper presents a study on improving human action recognition through knowledge distillation and the combination of CNN and ViT models. The research aims to enhance the performance and efficiency of smaller student models by transferring knowledge from larger teacher models. The proposed method employs a Vision Transformer network as the student model, while a convolutional network serves as the teacher model. The teacher model extracts local image features, whereas the student model captures global features using an attention mechanism. The Vision Transformer (ViT) architecture is introduced as a robust framework for capturing global dependencies in images. Additionally, advanced variants of ViT, namely PVT, ConViT, MViT, Swin Transformer, and Twins, are discussed, highlighting their contributions to computer vision tasks. The ConvNeXt model is introduced as the teacher model, known for its efficiency and effectiveness in computer vision. The paper reports performance results for human action recognition on the Stanford 40 dataset, comparing the accuracy and mAP of student models trained with and without knowledge distillation. The findings show that the proposed approach significantly improves accuracy and mAP compared to training the networks under standard settings. These findings emphasize the potential of combining local and global features in action recognition tasks.
DOI: 10.48550/arxiv.2311.01283
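The abstract describes a logit-based knowledge distillation setup with a convolutional teacher (ConvNeXt) and a ViT student. Below is a minimal sketch of such a setup in PyTorch; the specific torchvision model variants (convnext_tiny, vit_b_16), the temperature, and the loss weighting are illustrative assumptions, not values taken from the paper.

```python
# Minimal knowledge-distillation sketch: ConvNeXt teacher -> ViT student.
# Model choices, temperature, and alpha are assumed for illustration only.
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 40          # Stanford 40 has 40 action classes
TEMPERATURE = 4.0         # assumed softening temperature
ALPHA = 0.7               # assumed weight on the distillation term

# Teacher: convolutional network (ConvNeXt); student: Vision Transformer.
teacher = models.convnext_tiny(num_classes=NUM_CLASSES)
student = models.vit_b_16(num_classes=NUM_CLASSES)
teacher.eval()            # the teacher is frozen during distillation

def distillation_loss(student_logits, teacher_logits, labels):
    """Cross-entropy on ground truth plus KL divergence to the teacher."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=1),
        F.softmax(teacher_logits / TEMPERATURE, dim=1),
        reduction="batchmean",
    ) * (TEMPERATURE ** 2)
    return ALPHA * kd + (1.0 - ALPHA) * ce

# One illustrative training step on a random batch.
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

with torch.no_grad():
    t_logits = teacher(images)   # teacher predictions, no gradients
s_logits = student(images)
loss = distillation_loss(s_logits, t_logits, labels)
loss.backward()
optimizer.step()
```

In this kind of setup only the student is updated; the softened teacher distribution supplies the "dark knowledge" that complements the hard labels.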