A Progressive Fusion Generative Adversarial Network for Realistic and Consistent Video Super-Resolution
How to effectively fuse temporal information from consecutive frames remains a non-trivial problem in video super-resolution (SR), since most existing fusion strategies (direct fusion, slow fusion, or 3D convolution) either fail to make full use of temporal information or cost too much computation.
Saved in:
Published in: | IEEE transactions on pattern analysis and machine intelligence 2022-05, Vol.44 (5), p.2264-2280 |
---|---|
Main authors: | Yi, Peng ; Wang, Zhongyuan ; Jiang, Kui ; Jiang, Junjun ; Lu, Tao ; Ma, Jiayi |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
container_end_page | 2280 |
---|---|
container_issue | 5 |
container_start_page | 2264 |
container_title | IEEE transactions on pattern analysis and machine intelligence |
container_volume | 44 |
creator | Yi, Peng ; Wang, Zhongyuan ; Jiang, Kui ; Jiang, Junjun ; Lu, Tao ; Ma, Jiayi |
description | How to effectively fuse temporal information from consecutive frames remains a non-trivial problem in video super-resolution (SR), since most existing fusion strategies (direct fusion, slow fusion, or 3D convolution) either fail to make full use of temporal information or cost too much computation. To this end, we propose a novel progressive fusion network for video SR, in which frames are processed by progressive separation and fusion to thoroughly exploit spatio-temporal information. We particularly incorporate a multi-scale structure and hybrid convolutions into the network to capture a wide range of dependencies. We further propose a non-local operation that extracts long-range spatio-temporal correlations directly, in place of traditional motion estimation and motion compensation (ME&MC). This design avoids complicated ME&MC algorithms yet performs better than various ME&MC schemes. Finally, we improve generative adversarial training for video SR to avoid temporal artifacts such as flickering and ghosting. In particular, we propose a frame variation loss with a single-sequence training method to generate more realistic and temporally consistent videos. Extensive experiments on public datasets show the superiority of our method over state-of-the-art methods in terms of performance and complexity. Our code is available at https://github.com/psychopa4/MSHPFNL . |
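The two mechanisms the abstract highlights — a non-local operation that relates every spatio-temporal position to every other (replacing explicit ME&MC), and a frame variation loss that matches frame-to-frame changes between SR and HR sequences — can be sketched as follows. This is a minimal NumPy illustration of the general ideas, not the authors' implementation; the function names `nonlocal_block` and `frame_variation_loss` are hypothetical, and the real network applies learned embeddings to deep feature maps rather than raw arrays.

```python
import numpy as np

def nonlocal_block(feats):
    """Minimal non-local operation over a stack of frame features.

    feats: (T, H, W, C) spatio-temporal feature volume. Every position
    attends to every other position across all frames, so long-range
    correlations are captured without explicit motion estimation.
    """
    t, h, w, c = feats.shape
    x = feats.reshape(t * h * w, c)          # flatten space-time into positions
    sim = x @ x.T                            # pairwise dot-product similarities
    sim -= sim.max(axis=1, keepdims=True)    # numerical stability for softmax
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)  # softmax over all positions
    out = attn @ x                           # aggregate features by attention
    return out.reshape(t, h, w, c)

def frame_variation_loss(sr_frames, hr_frames):
    """Hypothetical sketch of a frame variation loss: penalize differences
    between the temporal variations of the SR and HR sequences, which
    rewards temporal consistency (less flicker and ghosting) rather than
    per-frame fidelity alone."""
    sr_var = sr_frames[1:] - sr_frames[:-1]  # frame-to-frame variation
    hr_var = hr_frames[1:] - hr_frames[:-1]
    return float(np.mean(np.abs(sr_var - hr_var)))
```

Note that adding a constant brightness offset to every SR frame leaves this loss unchanged, since it compares variations, not frames — which is exactly why such a term targets flicker without duplicating a per-frame reconstruction loss.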
doi_str_mv | 10.1109/TPAMI.2020.3042298 |
format | Article |
publisher | United States: IEEE |
pmid | 33270559 |
eissn | 1939-3539 2160-9292 |
coden | ITPIDJ |
tpages | 17 |
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
orcid | 0000-0001-9366-951X ; 0000-0001-8117-2012 ; 0000-0002-4055-7503 ; 0000-0003-3264-3265 ; 0000-0002-5694-505X ; 0000-0002-9796-488X |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0162-8828 |
ispartof | IEEE transactions on pattern analysis and machine intelligence, 2022-05, Vol.44 (5), p.2264-2280 |
issn | 0162-8828 1939-3539 2160-9292 |
language | eng |
recordid | cdi_pubmed_primary_33270559 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms ; Convolution ; Convolutional neural network ; Frames (data processing) ; Gallium nitride ; generative adversarial network ; Generative adversarial networks ; Image reconstruction ; Motion compensation ; Motion simulation ; Neural networks ; progressive fusion ; spatio-temporal correlation ; Three-dimensional displays ; Training ; video super-resolution |
title | A Progressive Fusion Generative Adversarial Network for Realistic and Consistent Video Super-Resolution |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-20T19%3A28%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Progressive%20Fusion%20Generative%20Adversarial%20Network%20for%20Realistic%20and%20Consistent%20Video%20Super-Resolution&rft.jtitle=IEEE%20transactions%20on%20pattern%20analysis%20and%20machine%20intelligence&rft.au=Yi,%20Peng&rft.date=2022-05-01&rft.volume=44&rft.issue=5&rft.spage=2264&rft.epage=2280&rft.pages=2264-2280&rft.issn=0162-8828&rft.eissn=1939-3539&rft.coden=ITPIDJ&rft_id=info:doi/10.1109/TPAMI.2020.3042298&rft_dat=%3Cproquest_RIE%3E2645983884%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2645983884&rft_id=info:pmid/33270559&rft_ieee_id=9279273&rfr_iscdi=true |