Location Dependency in Video Prediction

International Conference on Artificial Neural Networks. Springer, Cham, 2018. Deep convolutional neural networks are used for many computer vision problems, including video prediction, but their spatial invariance prevents them from modeling location-dependent patterns; this work proposes location-biased convolutional layers to overcome that limitation.

Detailed description

Bibliographic Details
Main authors: Azizi, Niloofar; Farazi, Hafez; Behnke, Sven
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: order full text
creator Azizi, Niloofar; Farazi, Hafez; Behnke, Sven
description International Conference on Artificial Neural Networks. Springer, Cham, 2018. Deep convolutional neural networks are used to address many computer vision problems, including video prediction. The task of video prediction requires analyzing the video frames, temporally and spatially, and constructing a model of how the environment evolves. Convolutional neural networks are spatially invariant, though, which prevents them from modeling location-dependent patterns. In this work, we propose location-biased convolutional layers to overcome this limitation. The effectiveness of location bias is evaluated on two architectures: Video Ladder Network (VLN) and Convolutional Predictive Gating Pyramid (Conv-PGP). The results indicate that encoding location-dependent features is crucial for the task of video prediction. Our proposed methods significantly outperform spatially invariant models.
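The record does not reproduce the paper's exact layer design, but one common way to give a convolution access to absolute position (the general idea behind a location bias) is to append normalized coordinate channels to its input, so that subsequent filters can condition on where in the frame they are. A minimal NumPy sketch, with illustrative names of our own choosing, not taken from the paper:

```python
import numpy as np

def add_coord_channels(x):
    """Append normalized row/column coordinate channels to a feature map.

    x: array of shape (C, H, W). Returns an array of shape (C + 2, H, W).
    A convolution applied to the augmented input can learn
    location-dependent filters, breaking pure spatial invariance.
    """
    _, h, w = x.shape
    # Row coordinates in [-1, 1], broadcast across columns.
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    # Column coordinates in [-1, 1], broadcast across rows.
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    return np.concatenate([x, ys[None], xs[None]], axis=0)

feat = np.zeros((3, 4, 5), dtype=np.float32)
aug = add_coord_channels(feat)
print(aug.shape)  # (5, 4, 5)
```

The two extra channels vary linearly from -1 at the top/left to +1 at the bottom/right, so a learned filter weight on them acts as a per-location bias.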
creationdate 2018-10-11
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
link https://arxiv.org/abs/1810.04937
identifier DOI: 10.48550/arxiv.1810.04937
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition