Sign Language Animation Splicing Model Based on LpTransformer Network


Bibliographic Details
Published in: Ji suan ji ke xue 2023-01, Vol. 50 (9), p. 184
Authors: Huang, Hanqiang; Xing, Yunbing; Shen, Jianfei; Fan, Feiyi
Format: Article
Language: Chinese
Abstract: Sign language animation splicing is a hot topic. With the continuous development of machine learning technology, especially the gradual maturity of deep learning techniques, the speed and quality of sign language animation splicing are constantly improving. When sign language words are spliced into sentences, the corresponding animations also need to be spliced. Traditional algorithms use a distance loss to find the best splicing position and linear or spherical interpolation to generate transition frames. This splicing approach not only has obvious shortcomings in efficiency and flexibility, but also generates unnatural sign language animation. To solve these problems, the LpTransformer model is proposed to predict the splicing position and generate transition frames. Experimental results show that the prediction accuracy of LpTransformer's transition frames reaches 99%, which is superior to ConvS2S, LSTM and Transformer, and its splicing speed is five times faster than Transformer.
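
The abstract contrasts LpTransformer with the traditional pipeline, which searches for a splice point via a distance loss and fills the gap with linear or spherical interpolation. Below is a minimal sketch of that traditional baseline, assuming poses are stored as per-joint unit quaternions; the function names, search window, and transition-frame count are illustrative assumptions, not details from the paper.

```python
import numpy as np

def pose_distance(frame_a, frame_b):
    # Sum of per-joint quaternion geodesic distances between two poses.
    # frame_*: (num_joints, 4) arrays of unit quaternions (assumed layout).
    dots = np.abs(np.sum(frame_a * frame_b, axis=-1)).clip(0.0, 1.0)
    return float(np.sum(2.0 * np.arccos(dots)))

def find_splice_position(clip_a, clip_b, search_window=10):
    # Distance-loss search: compare the last `search_window` frames of
    # clip_a with the first `search_window` frames of clip_b and return
    # the (i, j) frame pair with the smallest pose distance.
    best, best_cost = (len(clip_a) - 1, 0), np.inf
    for i in range(max(0, len(clip_a) - search_window), len(clip_a)):
        for j in range(min(search_window, len(clip_b))):
            cost = pose_distance(clip_a[i], clip_b[j])
            if cost < best_cost:
                best_cost, best = cost, (i, j)
    return best

def slerp(q0, q1, t):
    # Spherical linear interpolation between two unit quaternions.
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    theta = np.arccos(min(dot, 1.0))
    if theta < 1e-6:              # nearly identical rotations
        return q0
    s0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * q0 + s1 * q1

def transition_frames(frame_a, frame_b, num_frames=8):
    # Generate interpolated transition poses, joint by joint.
    frames = []
    for k in range(1, num_frames + 1):
        t = k / (num_frames + 1)
        frames.append(np.array([slerp(qa, qb, t)
                                for qa, qb in zip(frame_a, frame_b)]))
    return frames
```

In this scheme the spliced sequence would be clip_a up to the chosen frame, the generated transition frames, and then clip_b from the chosen frame onward; per the abstract, the paper's contribution is to replace both the distance-loss search and the interpolation with a learned LpTransformer that predicts the splicing position and the transition frames directly.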
ISSN: 1002-137X