PhyOT: Physics-informed object tracking in surveillance cameras
Saved in:
Main authors: | Kamtue, Kawisorn; Moura, Jose M. F; Sangpetch, Orathai; Garcia, Paulo |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | https://arxiv.org/abs/2312.08650 |
creator | Kamtue, Kawisorn; Moura, Jose M. F; Sangpetch, Orathai; Garcia, Paulo |
description | While deep learning has been very successful in computer vision, real-world operating conditions such as lighting variation, background clutter, or occlusion hinder its accuracy across several tasks. Prior work has shown that hybrid models that combine neural networks with heuristics or algorithms can outperform vanilla deep learning on several computer vision tasks, such as classification or tracking. We consider the case of object tracking and evaluate a hybrid model (PhyOT) that conceptualizes deep neural networks as "sensors" in a Kalman filter setup, where prior knowledge, in the form of Newtonian laws of motion, is used to fuse sensor observations and produce improved state estimates. Our experiments combine three neural networks, estimating position, indirect velocity, and acceleration, respectively, and evaluate this formulation on two benchmark datasets: a warehouse security-camera dataset that we collected and annotated, and an open traffic-camera dataset. Results suggest that PhyOT can track objects in extreme conditions where state-of-the-art deep neural networks fail, while its performance in typical conditions does not degrade significantly relative to existing deep learning approaches. Results also suggest that the PhyOT components are generalizable and transferable. |
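The description above specifies the architecture at a level where a small worked example helps: deep networks act as noisy sensors for position, velocity, and acceleration, and a Kalman filter with a Newtonian constant-acceleration motion model fuses their outputs. The following is a minimal sketch of that formulation for a single 1-D coordinate; the 3-component state layout, the 30 fps frame interval, the noise covariances, and the `phyot_step` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a constant-acceleration Kalman filter
# whose "sensors" are neural-network estimates of position, velocity, and
# acceleration. All names and noise values below are illustrative assumptions.
import numpy as np

dt = 1.0 / 30.0  # assumed frame interval (30 fps camera)

# State x = [position, velocity, acceleration]; Newtonian constant-acceleration model.
F = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
H = np.eye(3)                 # each network observes one state component directly
Q = np.eye(3) * 1e-3          # assumed process noise
R = np.diag([1.0, 4.0, 9.0])  # assumed per-network measurement noise

x = np.zeros(3)               # initial state
P = np.eye(3) * 10.0          # initial uncertainty


def phyot_step(x, P, z):
    """One predict/update cycle; z = [pos_net, vel_net, acc_net] outputs for a frame."""
    # Predict with the Newtonian motion prior.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update by fusing the neural-network "sensor" readings.
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P


# Example: fuse one frame's (hypothetical) network outputs.
z = np.array([12.3, 0.8, 0.05])
x, P = phyot_step(x, P, z)
print(x)
```

In the tracking setting the abstract describes, the same structure would presumably be instantiated per tracked object and per image coordinate, with the three networks supplying the measurement vector at each frame.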
doi_str_mv | 10.48550/arxiv.2312.08650 |
format | Article |
creationdate | 2023-12-13 |
rights | http://creativecommons.org/licenses/by/4.0 |
oa | free_for_read |
link | https://arxiv.org/abs/2312.08650 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2312.08650 |
language | eng |
recordid | cdi_arxiv_primary_2312_08650 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | PhyOT: Physics-informed object tracking in surveillance cameras |