IMU-Assisted Online Video Background Identification

Distinguishing between dynamic foreground objects and a mostly static background is a fundamental problem in many computer vision and computer graphics tasks. This paper presents a novel online video background identification method with the assistance of an inertial measurement unit (IMU). Based on the fact that the background motion of a video essentially reflects the 3D camera motion, we leverage IMU data to realize a robust camera motion estimation for identifying background feature points by only investigating a few historical frames. We observe that the displacement of the 2D projection of a scene point caused by camera rotation is depth-invariant, and that the rotation estimation using IMU data can be quite accurate. We thus propose to analyze 2D feature points by decomposing the 2D motion into two components: rotation projection and translation projection. In our method, after establishing the 3D camera rotations, we generate the depth-relevant 2D feature point movement induced by the camera's 3D translation. Then, by examining the disparity between the inter-frame offset and the projection of the estimated 3D camera motion, we can identify the background feature points. In the experiments, our online method runs at 30 FPS with only one frame of latency and outperforms state-of-the-art background identification and other relevant methods. Our method directly leads to a better camera motion estimation, which is beneficial to many applications such as online video stabilization, SLAM, and image stitching.
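The depth-invariance of rotation-induced displacement mentioned in the abstract can be sketched with the infinite homography H = K·R·K⁻¹, which maps pixel coordinates between frames under a pure camera rotation regardless of scene depth. The snippet below is a minimal illustration of that idea, not the paper's implementation: the intrinsic matrix `K`, the IMU-derived rotation `R`, the pixel threshold, and all function names are illustrative assumptions.

```python
import numpy as np

def rotation_induced_points(pts, K, R):
    """Predict where 2D feature points move under a pure camera rotation.

    Uses the infinite homography H = K @ R @ inv(K), which is independent
    of scene depth -- the reason rotation-induced displacement can be
    compensated without knowing the 3D structure.
    """
    H = K @ R @ np.linalg.inv(K)
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # to homogeneous
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]                 # back to pixels

def classify_background(pts_prev, pts_curr, K, R, threshold=2.0):
    """Flag points whose inter-frame offset is explained by the camera
    rotation alone (small residual) as background candidates."""
    predicted = rotation_induced_points(pts_prev, K, R)
    residual = np.linalg.norm(pts_curr - predicted, axis=1)
    return residual < threshold
```

In the paper's pipeline the residual is further compared against the depth-dependent projection of the estimated camera translation; here a fixed pixel threshold stands in for that test to keep the sketch self-contained.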

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2022, Vol. 31, p. 4336-4351
Main Authors: Rong, Jian-Xiang; Zhang, Lei; Huang, Hua; Zhang, Fang-Lue
Format: Article
Language: English
Subjects:
Online Access: Order full text
container_end_page 4351
container_issue
container_start_page 4336
container_title IEEE transactions on image processing
container_volume 31
creator Rong, Jian-Xiang
Zhang, Lei
Huang, Hua
Zhang, Fang-Lue
description Distinguishing between dynamic foreground objects and a mostly static background is a fundamental problem in many computer vision and computer graphics tasks. This paper presents a novel online video background identification method with the assistance of an inertial measurement unit (IMU). Based on the fact that the background motion of a video essentially reflects the 3D camera motion, we leverage IMU data to realize a robust camera motion estimation for identifying background feature points by only investigating a few historical frames. We observe that the displacement of the 2D projection of a scene point caused by camera rotation is depth-invariant, and that the rotation estimation using IMU data can be quite accurate. We thus propose to analyze 2D feature points by decomposing the 2D motion into two components: rotation projection and translation projection. In our method, after establishing the 3D camera rotations, we generate the depth-relevant 2D feature point movement induced by the camera's 3D translation. Then, by examining the disparity between the inter-frame offset and the projection of the estimated 3D camera motion, we can identify the background feature points. In the experiments, our online method runs at 30 FPS with only one frame of latency and outperforms state-of-the-art background identification and other relevant methods. Our method directly leads to a better camera motion estimation, which is beneficial to many applications such as online video stabilization, SLAM, and image stitching.
doi_str_mv 10.1109/TIP.2022.3183442
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1057-7149; EISSN: 1941-0042; DOI: 10.1109/TIP.2022.3183442; PMID: 35727783; CODEN: IIPRE4
ispartof IEEE transactions on image processing, 2022, Vol.31, p.4336-4351
issn 1057-7149
1941-0042
language eng
recordid cdi_proquest_journals_2681953603
source IEEE Electronic Library (IEL)
subjects Background identification
camera motion
Cameras
Computer graphics
Computer vision
Identification methods
inertial measurement unit
Inertial platforms
Motion segmentation
Motion simulation
Object recognition
offset
online
Projection
Rotation
Stitching
Streaming media
Three dimensional motion
Three-dimensional displays
Trajectory
Two dimensional analysis
title IMU-Assisted Online Video Background Identification