Time Perception Machine: Temporal Point Processes for the When, Where and What of Activity Prediction

Numerous powerful point process models have been developed to understand temporal patterns in sequential data from fields such as health-care, electronic commerce, social networks, and natural disaster forecasting. In this paper, we develop novel models for learning the temporal distribution of human activities in streaming data (e.g., videos and person trajectories). We propose an integrated framework of neural networks and temporal point processes for predicting when the next activity will happen. Because point processes are limited to taking event frames as input, we propose a simple yet effective mechanism to extract features at frames of interest while also preserving the rich information in the remaining frames. We evaluate our model on two challenging datasets. The results show that our model outperforms traditional statistical point process approaches significantly, demonstrating its effectiveness in capturing the underlying temporal dynamics as well as the correlation within sequential activities. Furthermore, we also extend our model to a joint estimation framework for predicting the timing, spatial location, and category of the activity simultaneously, to answer the when, where, and what of activity prediction.
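The abstract describes coupling a recurrent encoder with a temporal point process so that the model can predict when the next activity will happen. As a rough illustration only (not the paper's exact parameterization), the sketch below computes the negative log-likelihood of observed inter-event gaps under a common RMTPP-style intensity lambda*(t) = exp(v·h_j + w(t - t_j) + b), whose compensator integral has a closed form; the names hidden_states, gaps, v, w, and b are hypothetical.

```python
import numpy as np

def tpp_neg_log_likelihood(hidden_states, gaps, v, w, b):
    """Negative log-likelihood of inter-event gaps under an exponential
    (RMTPP-style) intensity lambda*(t) = exp(v.h_j + w*(t - t_j) + b).

    Generic neural-TPP sketch, not the exact model of the paper.
    hidden_states: (N, D) recurrent encoder states, one per observed event
    gaps:          (N,)   time from each event to the next one
    v (D,), w, b:  hypothetical intensity parameters (w != 0)
    """
    s = hidden_states @ v + b  # history score per event
    # Closed-form log-density of the next gap d:
    #   log f*(d) = s + w*d + (exp(s) - exp(s + w*d)) / w
    # (the integral of the exponential-in-time intensity is analytic)
    log_f = s + w * gaps + (np.exp(s) - np.exp(s + w * gaps)) / w
    return -np.sum(log_f)
```

Training would minimize a term of this kind (plus classification and location losses in the joint when/where/what extension); predicting the next event time then amounts to taking the expectation of the gap under f*, typically via numerical integration.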

Bibliographic Details
Main Authors: Zhong, Yatao; Xu, Bicheng; Zhou, Guang-Tong; Bornn, Luke; Mori, Greg
Format: Article
Language: English
Published: 2018-08-13
Subjects: Computer Science - Computer Vision and Pattern Recognition
DOI: 10.48550/arxiv.1808.04063
Source: arXiv.org
Online Access: Order full text