Lightweight CNN combined with knowledge distillation for the accurate determination of black tea fermentation degree
Saved in:
Published in: | Food research international 2024-10, Vol.194, p.114929, Article 114929 |
Main Authors: | , , , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
Abstract: |
• Manual determination of the fermentation degree of black tea was subjective and arbitrary.
• The student model and teacher model were selected by comparing experimental results.
• Focal Loss was introduced to improve the discriminative performance of the model.
• The best result was achieved with the MGD method at a distillation loss ratio of 0.8.
Black tea is the second most common type of tea in China. Fermentation is one of the most critical steps in its production: both insufficient and excessive fermentation degrade the quality of the finished product. At present, the fermentation degree of black tea is determined entirely by human experience, which leads to inconsistent quality. To solve this problem, this paper applies machine vision to discriminate the fermentation degree of black tea from images, proposing a lightweight convolutional neural network (CNN) combined with knowledge distillation. After comparing 12 CNN models, and taking into account model size, discrimination performance, and the selection principles for teacher models, Shufflenet_v2_x1.0 is selected as the student model and Efficientnet_v2 as the teacher model. CrossEntropy Loss is then replaced by Focal Loss. Finally, four knowledge distillation methods, Soft Target Knowledge Distillation (ST), Masked Generative Distillation (MGD), Similarity-Preserving Knowledge Distillation (SPKD), and Attention Transfer (AT), are tested at distillation loss ratios of 0.6, 0.7, 0.8, and 0.9 for their performance in distilling knowledge into the Shufflenet_v2_x1.0 model. The results show that discrimination performance after distillation is best with the MGD method at a distillation loss ratio of 0.8. This setup improves discrimination performance without increasing the number of parameters or the computational cost. The model's P, R, and F1 values reach 0.9208, 0.9190, and 0.9192, respectively, achieving precise discrimination of the fermentation degree of black tea. This meets the requirement for objective judgment of black tea fermentation and provides technical support for intelligent black tea processing. |
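The abstract combines two ingredients: Focal Loss on the hard labels and a knowledge-distillation loss weighted by a distillation loss ratio (best at 0.8). The sketch below illustrates the standard formulations of these terms in plain Python; the soft-target variant (ST) is shown as the distillation term because the abstract does not specify how MGD's feature-masking loss is computed, and the temperature and gamma values here are illustrative assumptions, not values from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    # Softmax with optional temperature scaling; T > 1 softens the distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def focal_loss(probs, target, gamma=2.0):
    # Focal Loss for one sample: -(1 - p_t)^gamma * log(p_t).
    # The (1 - p_t)^gamma factor down-weights easy, well-classified samples.
    p_t = probs[target]
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

def soft_target_kd_loss(student_logits, teacher_logits, temperature=4.0):
    # Cross-entropy between the temperature-softened teacher and student
    # distributions, scaled by T^2 (Hinton-style soft-target distillation).
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    ce = -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
    return (temperature ** 2) * ce

def total_loss(student_logits, teacher_logits, target,
               alpha=0.8, gamma=2.0, temperature=4.0):
    # alpha is the distillation loss ratio: alpha weights the soft
    # (teacher-matching) term, (1 - alpha) weights the Focal Loss
    # on the hard ground-truth label.
    hard = focal_loss(softmax(student_logits), target, gamma)
    soft = soft_target_kd_loss(student_logits, teacher_logits, temperature)
    return alpha * soft + (1.0 - alpha) * hard
```

In a training loop, `student_logits` would come from the Shufflenet_v2_x1.0 student and `teacher_logits` from the frozen Efficientnet_v2 teacher for the same image batch; only the student's parameters are updated.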
ISSN: | 0963-9969 1873-7145 |
DOI: | 10.1016/j.foodres.2024.114929 |