Sign Language Animation Splicing Model Based on LpTransformer Network

Sign language animation splicing is a hot research topic. With the continuous development of machine learning, and in particular the growing maturity of deep learning techniques, the speed and quality of sign language animation splicing keep improving. When sign language words are spliced into sentences, the corresponding animations must be spliced as well. Traditional algorithms use a distance loss to find the best splicing position and linear or spherical interpolation to generate transition frames. This splicing approach is not only inefficient and inflexible but also produces unnatural sign language animation. To solve these problems, the LpTransformer model is proposed to predict the splicing position and generate transition frames. Experimental results show that the prediction accuracy of LpTransformer's transition frames reaches 99%, which is superior to ConvS2S, LSTM and Transformer, and its splicing speed is five times faster than Transformer's.
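The interpolation step that the abstract attributes to traditional splicing algorithms can be sketched as follows. This is an illustrative sketch, not code from the paper: it assumes each skeleton pose is represented as a list of unit quaternions (one per joint) and generates transition frames by spherical linear interpolation (slerp); the function and parameter names are hypothetical.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    # Flip one quaternion if needed so we interpolate along the shorter arc.
    if dot < 0.0:
        q1 = [-c for c in q1]
        dot = -dot
    if dot > 0.9995:
        # Nearly parallel: fall back to normalized linear interpolation.
        out = [a + t * (b - a) for a, b in zip(q0, q1)]
        norm = math.sqrt(sum(c * c for c in out))
        return [c / norm for c in out]
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

def transition_frames(pose_a, pose_b, n):
    """Generate n transition frames between two poses (lists of joint quaternions)."""
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        frames.append([slerp(qa, qb, t) for qa, qb in zip(pose_a, pose_b)])
    return frames
```

A model such as LpTransformer replaces this hand-crafted interpolation (and the distance-loss search for the splicing position) with learned predictions, which is where the claimed gains in naturalness and speed come from.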

Detailed Description

Saved in:
Bibliographic details
Published in: Ji suan ji ke xue 2023-01, Vol.50 (9), p.184
Main authors: Huang, Hanqiang; Xing, Yunbing; Shen, Jianfei; Fan, Feiyi
Format: Article
Language: Chinese (chi)
Subjects: Algorithms; Animation; Deep learning; Frames; Interpolation; Machine learning; Sentences; Transformers
Online access: Full text
container_issue 9
container_start_page 184
container_title Ji suan ji ke xue
container_volume 50
creator Huang, Hanqiang
Xing, Yunbing
Shen, Jianfei
Fan, Feiyi
description Sign language animation splicing is a hot research topic. With the continuous development of machine learning, and in particular the growing maturity of deep learning techniques, the speed and quality of sign language animation splicing keep improving. When sign language words are spliced into sentences, the corresponding animations must be spliced as well. Traditional algorithms use a distance loss to find the best splicing position and linear or spherical interpolation to generate transition frames. This splicing approach is not only inefficient and inflexible but also produces unnatural sign language animation. To solve these problems, the LpTransformer model is proposed to predict the splicing position and generate transition frames. Experimental results show that the prediction accuracy of LpTransformer's transition frames reaches 99%, which is superior to ConvS2S, LSTM and Transformer, and its splicing speed is five times faster than Transformer's.
format Article
fulltext fulltext
identifier ISSN: 1002-137X
ispartof Ji suan ji ke xue, 2023-01, Vol.50 (9), p.184
issn 1002-137X
language chi
recordid cdi_proquest_journals_2860845754
source DOAJ Directory of Open Access Journals
subjects Algorithms
Animation
Deep learning
Frames
Interpolation
Machine learning
Sentences
Transformers
title Sign Language Animation Splicing Model Based on LpTransformer Network
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T23%3A03%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Sign%20Language%20Animation%20Splicing%20Model%20Based%20on%20LpTransformer%20Network&rft.jtitle=Ji%20suan%20ji%20ke%20xue&rft.au=Huang,%20Hanqiang&rft.date=2023-01-01&rft.volume=50&rft.issue=9&rft.spage=184&rft.pages=184-&rft.issn=1002-137X&rft_id=info:doi/&rft_dat=%3Cproquest%3E2860845754%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2860845754&rft_id=info:pmid/&rfr_iscdi=true