Video image fusion method based on space-time grid
The invention relates to the technical field of video image fusion, and in particular to a video image fusion method based on a space-time grid. The method comprises the following steps: collecting event-related videos from the network using a CNN (Convolutional Neural Network); achieving regional alignment and space-time alignment by means of space-time grid coding; and segmenting and arranging the collected videos with the CNN. Whether the input video data processed by the CNN belong to the same event point is then judged; a user supplies a more specific time range and the event characteristics of the event point to a GAN (Generative Adversarial Network), after which the same-event-point judgement is repeated. Finally, the fused video is identified and analyzed through video event point feature identification.
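The space-time grid coding step in the abstract can be illustrated with a minimal sketch (not the patent's actual algorithm): each clip's location and timestamp are quantized into a discrete grid cell, and clips landing in the same cell become candidates for fusion as the same event point. The cell sizes, function names, and metadata layout below are all illustrative assumptions.

```python
from datetime import datetime, timezone
from collections import defaultdict

# Illustrative cell sizes — assumed values, not taken from the patent.
LAT_STEP = 0.01      # roughly 1.1 km per cell in latitude
LON_STEP = 0.01
TIME_STEP_S = 60     # one-minute time slices

def spacetime_cell(lat: float, lon: float, ts: datetime) -> tuple:
    """Quantize a (lat, lon, time) point into a discrete space-time grid cell."""
    t = int(ts.replace(tzinfo=timezone.utc).timestamp() // TIME_STEP_S)
    return (int(lat // LAT_STEP), int(lon // LON_STEP), t)

def group_clips_by_cell(clips):
    """Bucket clips whose metadata falls in the same cell, so clips of the
    same event point can be aligned and fused together."""
    buckets = defaultdict(list)
    for clip in clips:
        key = spacetime_cell(clip["lat"], clip["lon"], clip["time"])
        buckets[key].append(clip["id"])
    return dict(buckets)
```

In this sketch, two clips recorded within the same minute and within one grid cell of each other share a bucket, while a clip from a different place or time does not.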
Saved in:
Main authors: | YAO HAIPENG; QI FENG; XI TIEYIN; DAI DONG; CHAI SHAOFU; CHENG LI; WEI FENGSHA; LUO WEILI; ZHAO YANG |
---|---|
Format: | Patent |
Language: | chi ; eng |
Subjects: | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
Online access: | Order full text |
creator | YAO HAIPENG; QI FENG; XI TIEYIN; DAI DONG; CHAI SHAOFU; CHENG LI; WEI FENGSHA; LUO WEILI; ZHAO YANG |
description | The invention relates to the technical field of video image fusion, and in particular to a video image fusion method based on a space-time grid. The method comprises the following steps: collecting event-related videos from the network using a CNN (Convolutional Neural Network); achieving regional alignment and space-time alignment by means of space-time grid coding; and segmenting and arranging the collected videos with the CNN. Whether the input video data processed by the CNN belong to the same event point is then judged; a user supplies a more specific time range and the event characteristics of the event point to a GAN (Generative Adversarial Network), after which the same-event-point judgement is repeated. Finally, the fused video is identified and analyzed through video event point feature identification. |
format | Patent |
fulltext | fulltext_linktorsrc |
language | chi ; eng |
recordid | cdi_epo_espacenet_CN117975213A |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
title | Video image fusion method based on space-time grid |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T23%3A09%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=YAO%20HAIPENG&rft.date=2024-05-03&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN117975213A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |