A Linear Dynamical System Framework for Salient Motion Detection
Detection of salient motion in a video involves determining which motion is attended to by the human visual system in the presence of background motion that consists of complex visuals that are constantly changing. Salient motion is marked by its predictability compared to the more complex unpredictable motion of the background, such as the fluttering of leaves, ripples in water, and the dispersion of smoke.
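The abstract describes modeling the video as a linear dynamical system (LDS) and ranking output pixels by how "observable" the salient dynamics are through them. The Python sketch below is only a minimal illustration of that idea, not the authors' algorithm: it assumes a standard SVD-based, dynamic-texture-style LDS fit, an ad hoc row-norm score over a truncated observability matrix, and hypothetical helper names (`fit_lds`, `pixel_observability_saliency`).

```python
# Minimal sketch (illustrative only): fit an LDS x_{t+1} = A x_t, y_t = C x_t to a short
# grayscale clip, then score each pixel by the energy of its rows in a truncated
# extended observability matrix [C; CA; ...; CA^(h-1)].
import numpy as np

def fit_lds(frames, n_states=10):
    """frames: (tau, H, W) grayscale clip -> (A, C) of an order-n linear dynamical system."""
    tau, H, W = frames.shape
    Y = frames.reshape(tau, H * W).T.astype(float)   # p x tau data matrix (one column per frame)
    Y = Y - Y.mean(axis=1, keepdims=True)            # remove the static per-pixel mean
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    n = min(n_states, len(s))
    C = U[:, :n]                                     # p x n observation matrix (y_t = C x_t)
    X = np.diag(s[:n]) @ Vt[:n, :]                   # n x tau estimated state trajectory
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])         # least-squares fit of x_{t+1} = A x_t
    return A, C

def pixel_observability_saliency(A, C, horizon=5):
    """Score pixel i by the norm of [c_i, c_i A, ..., c_i A^(h-1)], i.e. the energy of its
    rows in a truncated extended observability matrix, rescaled to [0, 1]."""
    blocks, CAk = [], C
    for _ in range(horizon):
        blocks.append(CAk)
        CAk = CAk @ A
    O = np.concatenate(blocks, axis=1)               # p x (n * horizon)
    score = np.linalg.norm(O, axis=1)
    return (score - score.min()) / (score.max() - score.min() + 1e-12)

# Toy usage: a drifting bright square (predictable motion) over temporally uncorrelated noise.
rng = np.random.default_rng(0)
clip = rng.normal(0.0, 0.2, size=(20, 32, 32))
for t in range(20):
    clip[t, 8 + t // 2 : 14 + t // 2, 10:16] += 1.0  # square moving downward over time
A, C = fit_lds(clip, n_states=8)
saliency_map = pixel_observability_saliency(A, C).reshape(32, 32)
```

In the toy clip, the coherently drifting square is the kind of predictable motion such a score is meant to favor over the temporally uncorrelated background noise; the paper additionally builds region-level saliency maps from the similarity of patch dynamics, which this sketch does not attempt.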
Saved in:
Published in: | IEEE transactions on circuits and systems for video technology 2012-05, Vol.22 (5), p.683-692 |
---|---|
Main authors: | Gopalakrishnan, V.; Rajan, D.; Yiqun Hu |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 692 |
---|---|
container_issue | 5 |
container_start_page | 683 |
container_title | IEEE transactions on circuits and systems for video technology |
container_volume | 22 |
creator | Gopalakrishnan, V.; Rajan, D.; Yiqun Hu |
description | Detection of salient motion in a video involves determining which motion is attended to by the human visual system in the presence of background motion that consists of complex visuals that are constantly changing. Salient motion is marked by its predictability compared to the more complex unpredictable motion of the background such as fluttering of leaves, ripples in water, dispersion of smoke, and others. We introduce a novel approach to detect salient motion based on the concept of "observability" from the output pixels, when the video sequence is represented as a linear dynamical system. The group of output pixels with maximum saliency is further used to model the holistic dynamics of the salient region. The pixel saliency map is bolstered by two region-based saliency maps, which are computed based on the similarity of dynamics of the different spatiotemporal patches in the video with the salient region dynamics, in a global as well as a local sense. The resulting algorithm is tested on a set of challenging sequences and compared to state-of-the-art methods to showcase its superior performance on grounds of its computational efficiency and ability to detect salient motion. |
doi_str_mv | 10.1109/TCSVT.2011.2177177 |
format | Article |
fullrecord | publisher: New York, NY: IEEE; CODEN: ITCTEM; pages: 10; IEEE Xplore document 6086603 (https://ieeexplore.ieee.org/document/6086603) |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1051-8215 |
ispartof | IEEE transactions on circuits and systems for video technology, 2012-05, Vol.22 (5), p.683-692 |
issn | 1051-8215; 1558-2205 |
language | eng |
recordid | cdi_proquest_miscellaneous_1022850107 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Applied sciences; Computational modeling; Covariance matrix; Detection, estimation, filtering, equalization, prediction; Dynamical systems; Dynamics; Electronics; Exact sciences and technology; Image processing; Information, signal and communications theory; Leaves; Linear dynamical systems; Mathematical model; Observability; Pixels; Ripples; Signal and communications theory; Signal processing; Signal, noise; Smoke; Telecommunications and information theory; Testing, measurement, noise and reliability; Vectors; video saliency; Video sequences; Visual |
title | A Linear Dynamical System Framework for Salient Motion Detection |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T23%3A50%3A08IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Linear%20Dynamical%20System%20Framework%20for%20Salient%20Motion%20Detection&rft.jtitle=IEEE%20transactions%20on%20circuits%20and%20systems%20for%20video%20technology&rft.au=Gopalakrishnan,%20V.&rft.date=2012-05-01&rft.volume=22&rft.issue=5&rft.spage=683&rft.epage=692&rft.pages=683-692&rft.issn=1051-8215&rft.eissn=1558-2205&rft.coden=ITCTEM&rft_id=info:doi/10.1109/TCSVT.2011.2177177&rft_dat=%3Cproquest_RIE%3E2651648031%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1011025180&rft_id=info:pmid/&rft_ieee_id=6086603&rfr_iscdi=true |