Video frame synthesis method and device, equipment and storage medium
The invention discloses a video frame synthesis method and device, equipment, and a storage medium. The method comprises: inputting a video frame sequence into a preset hybrid spatio-temporal convolutional network to obtain semantic features of the sequence at different spatio-temporal scales; performing feature fusion on those semantic features to obtain fused semantic features; and determining a video frame synthesis result from the fused semantic features. Compared with the existing approach of synthesizing video frames by densely estimating the motion between the given frames, the approach provided by the invention can synthesize video frames with …
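The three steps named in the abstract (multi-scale spatio-temporal feature extraction, feature fusion, frame read-out) can be illustrated with a toy NumPy sketch. Everything below is hypothetical: the record does not disclose the network architecture, so average pooling stands in for the hybrid spatio-temporal convolutions and a weighted sum stands in for the fusion step; the names `spatiotemporal_features` and `synthesize_frame` are invented for illustration only.

```python
import numpy as np

def spatiotemporal_features(frames, t_scale, s_scale):
    """Stand-in feature extractor: average-pool a (T, H, W) frame stack
    over temporal windows of t_scale frames and spatial blocks of
    s_scale x s_scale pixels, then upsample back to input resolution
    so that features from different scales can be fused."""
    T, H, W = frames.shape
    # temporal pooling over non-overlapping windows
    f = frames.reshape(T // t_scale, t_scale, H, W).mean(axis=1)
    # spatial pooling over non-overlapping blocks
    f = f.reshape(f.shape[0], H // s_scale, s_scale,
                  W // s_scale, s_scale).mean(axis=(2, 4))
    # nearest-neighbour upsampling back to (H, W)
    return f.repeat(s_scale, axis=1).repeat(s_scale, axis=2)

def synthesize_frame(frames, scales=((1, 1), (2, 2)), weights=(0.5, 0.5)):
    """Fuse features from several space-time scales (weighted sum)
    and read out a single synthesized frame."""
    return sum(w * spatiotemporal_features(frames, ts, ss).mean(axis=0)
               for (ts, ss), w in zip(scales, weights))

frames = np.random.rand(4, 8, 8).astype(np.float32)  # 4 frames of 8x8 pixels
out = synthesize_frame(frames)
print(out.shape)  # (8, 8): one synthesized frame at input resolution
```

In an actual implementation the pooling would be learned 3-D convolutions and the fusion would itself be learned, but the data flow (per-scale features, fusion, single-frame output) matches the steps the abstract lists.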
Saved in:
Main authors: CHENG HUI; RUAN ZHE; WANG LIXUE; LIU SONGPENG
Format: Patent
Language: Chinese; English
Subjects: Calculating; Computer systems based on specific computational models; Computing; Counting; Physics
Online access: Order full text
creator | CHENG HUI; RUAN ZHE; WANG LIXUE; LIU SONGPENG
description | The invention discloses a video frame synthesis method and device, equipment, and a storage medium. The method comprises: inputting a video frame sequence into a preset hybrid spatio-temporal convolutional network to obtain semantic features of the sequence at different spatio-temporal scales; performing feature fusion on those semantic features to obtain fused semantic features; and determining a video frame synthesis result from the fused semantic features. Compared with the existing approach of synthesizing video frames by densely estimating the motion between the given frames, the approach provided by the invention can synthesize video frames with …
format | Patent |
fulltext | fulltext_linktorsrc |
language | Chinese; English
recordid | cdi_epo_espacenet_CN114882416A |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS
title | Video frame synthesis method and device, equipment and storage medium |