BlockNet: A Deep Neural Network for Block-Based Motion Estimation Using Representative Matching

Owing to the limitations of practical realizations, block-based motion is widely used as an alternative to pixel-based motion in video applications such as global motion estimation and frame-rate up-conversion. We present BlockNet, a compact but effective deep neural architecture for block-based motion estimation. First, BlockNet extracts rich features from a pair of input images. Then, it estimates coarse-to-fine block motion using a pyramidal structure. At each level, block-based motion is estimated using the proposed representative matching with a simple average operator. The experimental results show that BlockNet achieved a similar average end-point error with and without representative matching, whereas the proposed matching incurred 18% lower computational cost than full matching.
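Read as a sketch: representative matching replaces a full per-pixel comparison of two blocks with a comparison of one averaged ("representative") feature per block. The minimal NumPy sketch below illustrates that idea only; the block size, search range, L1 matching cost, and every function name here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def block_representatives(feat, block=8):
    """Average a (H, W, C) feature map over non-overlapping block x block cells."""
    H, W, C = feat.shape
    Hc, Wc = H - H % block, W - W % block              # crop to a block multiple
    return feat[:Hc, :Wc].reshape(
        Hc // block, block, Wc // block, block, C).mean(axis=(1, 3))

def representative_matching(f1, f2, block=8, search=2):
    """For each block of f1, find the best match in f2 within a +/- `search`
    window (in block units), comparing one mean feature per block instead of
    all block*block pixel features."""
    r1 = block_representatives(f1, block)
    r2 = block_representatives(f2, block)
    Hb, Wb, _ = r1.shape
    motion = np.zeros((Hb, Wb, 2), dtype=int)          # (dy, dx) in block units
    for i in range(Hb):
        for j in range(Wb):
            best, best_cost = (0, 0), np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < Hb and 0 <= jj < Wb:
                        cost = np.abs(r1[i, j] - r2[ii, jj]).sum()   # L1 cost
                        if cost < best_cost:
                            best_cost, best = cost, (di, dj)
            motion[i, j] = best
    return motion

# Toy check: a random "feature map" shifted right by exactly one 8-pixel block.
rng = np.random.default_rng(0)
f1 = rng.standard_normal((32, 32, 4))
f2 = np.roll(f1, 8, axis=1)
print(representative_matching(f1, f2)[1, 1])           # -> [0 1]
```

The saving is that each candidate comparison touches one C-dimensional vector instead of block*block of them, which is consistent with the abstract's claim of similar error at noticeably lower matching cost.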

Full description

Saved in:
Bibliographic Details
Published in: Symmetry (Basel) 2020-05, Vol.12 (5), p.840, Article 840
Main authors: Lee, Junggi; Kong, Kyeongbo; Bae, Gyujin; Song, Woo-Jin
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page
container_issue 5
container_start_page 840
container_title Symmetry (Basel)
container_volume 12
creator Lee, Junggi
Kong, Kyeongbo
Bae, Gyujin
Song, Woo-Jin
description Owing to the limitations of practical realizations, block-based motion is widely used as an alternative to pixel-based motion in video applications such as global motion estimation and frame-rate up-conversion. We present BlockNet, a compact but effective deep neural architecture for block-based motion estimation. First, BlockNet extracts rich features from a pair of input images. Then, it estimates coarse-to-fine block motion using a pyramidal structure. At each level, block-based motion is estimated using the proposed representative matching with a simple average operator. The experimental results show that BlockNet achieved a similar average end-point error with and without representative matching, whereas the proposed matching incurred 18% lower computational cost than full matching.
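For context, the average end-point error (EPE) cited in the results is the standard metric for motion estimation: the mean Euclidean distance between estimated and ground-truth motion vectors. A minimal sketch, assuming (..., 2) motion fields; the function name is hypothetical.

```python
import numpy as np

def average_epe(flow_est, flow_gt):
    """Mean Euclidean distance between estimated and ground-truth vectors.

    Both arrays have shape (..., 2), holding a (dx, dy) vector per block/pixel.
    """
    return float(np.mean(np.linalg.norm(flow_est - flow_gt, axis=-1)))

# Toy check: estimates off by one pixel horizontally give an EPE of exactly 1.0.
gt = np.zeros((4, 4, 2))
est = gt + np.array([1.0, 0.0])
print(average_epe(est, gt))   # 1.0
```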
doi_str_mv 10.3390/sym12050840
format Article
publisher MDPI, Basel
rights COPYRIGHT 2020 MDPI AG; licensed under http://creativecommons.org/licenses/by/3.0/
fulltext fulltext
identifier ISSN: 2073-8994
ispartof Symmetry (Basel), 2020-05, Vol.12 (5), p.840, Article 840
issn 2073-8994
2073-8994
language eng
recordid cdi_proquest_journals_2406254751
source MDPI - Multidisciplinary Digital Publishing Institute; DOAJ Directory of Open Access Journals; Web of Science - Science Citation Index Expanded - 2020; EZB-FREE-00999 freely available EZB journals
subjects Algorithms
Applied research
Artificial neural networks
block matching
block-based motion
deep neural network
Estimation theory
Feature extraction
Human locomotion
Image processing
Matching
Methods
motion estimation
Motion simulation
Multidisciplinary Sciences
Neural networks
representative matching
Science & Technology
Science & Technology - Other Topics
Upconversion
Video compression
title BlockNet: A Deep Neural Network for Block-Based Motion Estimation Using Representative Matching
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-11-30T00%3A56%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=BlockNet:%20A%20Deep%20Neural%20Network%20for%20Block-Based%20Motion%20Estimation%20Using%20Representative%20Matching&rft.jtitle=Symmetry%20(Basel)&rft.au=Lee,%20Junggi&rft.date=2020-05-01&rft.volume=12&rft.issue=5&rft.spage=840&rft.pages=840-&rft.artnum=840&rft.issn=2073-8994&rft.eissn=2073-8994&rft_id=info:doi/10.3390/sym12050840&rft_dat=%3Cgale_proqu%3EA629831655%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2406254751&rft_id=info:pmid/&rft_galeid=A629831655&rft_doaj_id=oai_doaj_org_article_c886e94613bc421388310ad6b03a2789&rfr_iscdi=true