Predictive Saliency Maps for Surveillance Videos

When viewing video sequences, the human visual system (HVS) tends to focus on the active objects. These are perceived as the most salient regions in the scene. Additionally, human observers tend to predict the future positions of moving objects in a dynamic scene and to direct their gaze to these positions. In this paper we propose a saliency detection model that accounts for the motion in the sequence and predicts the positions of the salient objects in future frames. This is a novel technique for attention models that we call the Predictive Saliency Map (PSM). PSM improves the consistency of the estimated saliency maps for video sequences. PSM uses both static information provided by static saliency maps (SSM) and motion vectors to predict future salient regions in the next frame. In this paper we focus only on surveillance videos; therefore, in addition to low-level features such as intensity, color, and orientation, we consider high-level features such as faces as salient regions that naturally attract viewers' attention. Saliency maps computed from these static features are combined with motion saliency maps to account for saliency created by the activity in the scene. The predicted saliency map is computed using previous saliency maps and motion information. The PSMs are compared with experimentally obtained gaze maps and with saliency maps obtained using approaches from the literature. The experimental results show that our enhanced model yields a higher ability to predict eye fixations in surveillance videos.
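The abstract describes predicting the next frame's saliency by propagating the previous saliency map along motion vectors and fusing the result with a static saliency map. The following is a minimal sketch of that idea, not the authors' implementation: dense per-pixel motion vectors, single-channel maps, and the function name and blending weight `alpha` are all assumptions for illustration.

```python
import numpy as np

def predictive_saliency(prev_saliency, motion_vectors, static_saliency, alpha=0.5):
    """Warp the previous frame's saliency map along per-pixel motion
    vectors, then blend it with the current static saliency map (SSM).
    prev_saliency, static_saliency: (H, W) arrays in [0, 1].
    motion_vectors: (H, W, 2) array of (dx, dy) displacements in pixels.
    alpha: hypothetical blending weight between prediction and SSM."""
    h, w = prev_saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination coordinates: each pixel's saliency moves with its motion vector.
    xd = np.clip(xs + motion_vectors[..., 0].round().astype(int), 0, w - 1)
    yd = np.clip(ys + motion_vectors[..., 1].round().astype(int), 0, h - 1)
    predicted = np.zeros_like(prev_saliency)
    # Forward warp (splat); where pixels collide, keep the maximum saliency.
    np.maximum.at(predicted, (yd, xd), prev_saliency)
    # Fuse the motion-based prediction with the static saliency map.
    return alpha * predicted + (1.0 - alpha) * static_saliency
```

A real system would obtain `motion_vectors` from a block-matching or optical-flow estimator and normalize the fused map; the max-splat collision rule here is one simple choice among several.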

Bibliographic Details
Main Authors: Guraya, Fahad Fazal Elahi; Cheikh, Faouzi Alaya; Tremeau, Alain; Yubing Tong; Konik, Hubert
Format: Conference Proceeding
Language: English
Online Access: Order full text
container_start_page 508
container_end_page 513
container_title 2010 Ninth International Symposium on Distributed Computing and Applications to Business, Engineering and Science
creator Guraya, Fahad Fazal Elahi ; Cheikh, Faouzi Alaya ; Tremeau, Alain ; Yubing Tong ; Konik, Hubert
description When viewing video sequences, the human visual system (HVS) tends to focus on the active objects. These are perceived as the most salient regions in the scene. Additionally, human observers tend to predict the future positions of moving objects in a dynamic scene and to direct their gaze to these positions. In this paper we propose a saliency detection model that accounts for the motion in the sequence and predicts the positions of the salient objects in future frames. This is a novel technique for attention models that we call the Predictive Saliency Map (PSM). PSM improves the consistency of the estimated saliency maps for video sequences. PSM uses both static information provided by static saliency maps (SSM) and motion vectors to predict future salient regions in the next frame. In this paper we focus only on surveillance videos; therefore, in addition to low-level features such as intensity, color, and orientation, we consider high-level features such as faces as salient regions that naturally attract viewers' attention. Saliency maps computed from these static features are combined with motion saliency maps to account for saliency created by the activity in the scene. The predicted saliency map is computed using previous saliency maps and motion information. The PSMs are compared with experimentally obtained gaze maps and with saliency maps obtained using approaches from the literature. The experimental results show that our enhanced model yields a higher ability to predict eye fixations in surveillance videos.
doi_str_mv 10.1109/DCABES.2010.160
format Conference Proceeding
fulltext fulltext_linktorsrc
identifier ISBN: 1424475392 ; ISBN: 9781424475391 ; EISBN: 0769541100 ; EISBN: 1424475406 ; EISBN: 9781424475407 ; EISBN: 9780769541105
ispartof 2010 Ninth International Symposium on Distributed Computing and Applications to Business, Engineering and Science, 2010, p.508-513
language eng
recordid cdi_ieee_primary_5571574
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Computational modeling ; Humans ; Mathematical model ; Predictive models ; Surveillance ; Video sequences ; Videos
title Predictive Saliency Maps for Surveillance Videos
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T07%3A30%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Predictive%20Saliency%20Maps%20for%20Surveillance%20Videos&rft.btitle=2010%20Ninth%20International%20Symposium%20on%20Distributed%20Computing%20and%20Applications%20to%20Business,%20Engineering%20and%20Science&rft.au=Guraya,%20Fahad%20Fazal%20Elahi&rft.date=2010-08&rft.spage=508&rft.epage=513&rft.pages=508-513&rft.isbn=1424475392&rft.isbn_list=9781424475391&rft_id=info:doi/10.1109/DCABES.2010.160&rft_dat=%3Cieee_6IE%3E5571574%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=0769541100&rft.eisbn_list=1424475406&rft.eisbn_list=9781424475407&rft.eisbn_list=9780769541105&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=5571574&rfr_iscdi=true