VOE: A new sparsity-based camera network placement framework

In this paper, we propose a stepwise sparsity-based framework for camera network placement. Unlike most previous methods, which are developed for specific tasks, our approach is universal and generalizes well across different application scenarios. The approach consists of three steps: visibility analysis, optimization, and evaluation (VOE), which are applied sequentially and iteratively. First, we use a cascaded visibility filter model to construct a visibility matrix, in which each column describes the appearance representation of the surveillance area. Then, we formulate camera network layout as a sparse representation problem and employ an l1-optimization algorithm to obtain a feasible solution. Our framework is general and applicable to various objectives in practical applications. Experimental results demonstrate the effectiveness and efficiency of the proposed framework.
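The pipeline described in the abstract — a binary visibility matrix whose columns are candidate camera poses, followed by a sparsity-driven selection of a small covering subset — can be sketched roughly as follows. This is a toy illustration only: the geometry, the `radius` visibility test, and the greedy set-cover heuristic (standing in for the paper's l1-optimization step) are all assumptions, not the authors' actual cascaded filter model or solver.

```python
# Illustrative sketch of sparsity-driven camera selection. A greedy set-cover
# heuristic is used here as a stand-in for the paper's l1-optimization;
# the toy grid geometry and Manhattan-distance visibility test are hypothetical.

def build_visibility_matrix(points, cameras, radius):
    """M[i][j] = 1 if candidate camera j sees surveillance point i
    (toy model: within Manhattan distance `radius`)."""
    return [[1 if abs(px - cx) + abs(py - cy) <= radius else 0
             for (cx, cy) in cameras]
            for (px, py) in points]

def greedy_cover(M):
    """Select cameras until every point is covered (sparse-selection stand-in)."""
    n_points, n_cams = len(M), len(M[0])
    uncovered = set(range(n_points))
    chosen = []
    while uncovered:
        # Pick the camera that covers the most still-uncovered points.
        best = max(range(n_cams),
                   key=lambda j: sum(M[i][j] for i in uncovered))
        if sum(M[i][best] for i in uncovered) == 0:
            break  # remaining points are invisible to every candidate
        chosen.append(best)
        uncovered -= {i for i in uncovered if M[i][best]}
    return chosen

points = [(x, y) for x in range(4) for y in range(4)]   # surveillance grid
cameras = [(0, 0), (3, 3), (1, 2), (2, 1)]              # candidate poses
M = build_visibility_matrix(points, cameras, radius=3)
print(greedy_cover(M))  # indices of a small covering camera subset
```

In the paper's formulation the combinatorial selection is instead relaxed to an l1-minimization (minimize the l1 norm of the selection vector subject to coverage constraints), which a linear-programming solver can handle at larger scale.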

Bibliographic details
Published in: Neurocomputing (Amsterdam), 2016-07, Vol. 197, p. 184-194
Main authors: Fu, Yi-Ge; Zhou, Jie
Format: Article
Language: English
Online access: Full text
DOI: 10.1016/j.neucom.2016.02.065
ISSN: 0925-2312
EISSN: 1872-8286
Source: Elsevier ScienceDirect Journals
Subjects: Camera network placement; Cameras; Cascading filter model; Computational efficiency; Mathematical models; Networks; Placement; Sparsity; Stepwise framework; Surveillance; Tasks; Visibility