Knowledge distillation-based compression method for pre-trained language model, and platform
Format: Patent
Language: English
Online access: Order full text
Abstract: A knowledge distillation-based compression method for a pre-trained language model, and a platform. In the method, a universal feature transfer knowledge distillation strategy is first designed: in the process of distilling knowledge from a teacher model to a student model, the feature maps of each layer of the student model are driven to approximate the corresponding features of the teacher model, with emphasis on the feature expression capacity of the teacher model's intermediate layers on small samples, and these features are used to guide the student model. Then, the ability of the teacher model's self-attention distributions to capture semantic and syntactic relations between words is used to construct a knowledge distillation method based on self-attention crossover. Finally, in order to improve the learning quality of early-stage training and the generalization ability of late-stage training, a Bernoulli probability distribution-based linear transfer strategy is designed to gradually complete the transfer of feature-map and self-attention-distribution knowledge from the teacher to the student. By means of this method, automatic compression is performed on a pre-trained multi-task-oriented language model, improving language model compression efficiency.
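The abstract gives no concrete formulas, so the following is a minimal PyTorch sketch of how its three components (feature-map transfer, self-attention distillation, and a Bernoulli-gated linear transfer schedule) could be combined into one training signal. The function name `distillation_loss`, the MSE and KL objectives, the tensor shapes, the layer alignment, and the decaying schedule are all assumptions for illustration, not the patented method itself.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_feats, teacher_feats,
                      student_attn, teacher_attn,
                      step, total_steps):
    """Sketch of the three knowledge-transfer signals described in the abstract.

    Assumed shapes: per-layer feature maps of (batch, seq_len, hidden) and
    per-layer self-attention distributions of (batch, heads, seq_len, seq_len),
    with student and teacher lists already aligned layer-to-layer (e.g. via a
    fixed layer mapping) and matching hidden sizes (or a projection applied
    beforehand).
    """
    # 1) Universal feature transfer: pull each student layer's feature map
    #    toward the corresponding teacher feature map (MSE used here as a
    #    stand-in objective; the abstract does not specify the exact loss).
    feat_loss = sum(F.mse_loss(s, t)
                    for s, t in zip(student_feats, teacher_feats))

    # 2) Self-attention distillation: match the student's self-attention
    #    distributions, which encode semantic and syntactic relations between
    #    words, to the teacher's (KL divergence as a stand-in for the
    #    "self-attention crossover" objective).
    attn_loss = sum(
        F.kl_div(torch.log(s.clamp_min(1e-9)), t, reduction="batchmean")
        for s, t in zip(student_attn, teacher_attn)
    )

    # 3) Bernoulli-based linear transfer: at each step a Bernoulli draw decides
    #    whether the distillation signal is applied, with a probability that
    #    follows a linear schedule over training. A decaying schedule is assumed
    #    here (teacher guidance early, more independent learning late); the
    #    direction of the schedule is an assumption.
    p = max(0.0, 1.0 - step / float(total_steps))
    gate = torch.bernoulli(torch.tensor(p))

    return gate * (feat_loss + attn_loss)
```

In a training loop this term would be added to the student's ordinary task loss; gating the whole distillation term with a single Bernoulli draw per step is one simple way to realize a "gradual" transfer, though the patent may gate the feature-map and attention components separately.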