Visual Dynamics: Stochastic Future Generation via Layered Cross Convolutional Networks
We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image.
Saved in:
Published in: | IEEE transactions on pattern analysis and machine intelligence 2019-09, Vol.41 (9), p.2236-2250 |
---|---|
Main authors: | Xue, Tianfan; Wu, Jiajun; Bouman, Katherine L.; Freeman, William T. |
Format: | Article |
Language: | English (eng) |
Subjects: | |
Online access: | Order full text |
container_end_page | 2250 |
---|---|
container_issue | 9 |
container_start_page | 2236 |
container_title | IEEE transactions on pattern analysis and machine intelligence |
container_volume | 41 |
creator | Xue, Tianfan; Wu, Jiajun; Bouman, Katherine L.; Freeman, William T. |
description | We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, and on real-world video frames. We present analyses of the learned network representations, showing it is implicitly learning a compact encoding of object appearance and motion. We also demonstrate a few of its applications, including visual analogy-making and video extrapolation. |
doi_str_mv | 10.1109/TPAMI.2018.2854726 |
format | Article |
publisher | IEEE (United States) |
pmid | 30004870 |
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0162-8828 |
ispartof | IEEE transactions on pattern analysis and machine intelligence, 2019-09, Vol.41 (9), p.2236-2250 |
issn | 0162-8828; 1939-3539; 2160-9292 |
language | eng |
recordid | cdi_ieee_primary_8409321 |
source | IEEE Electronic Library (IEL) |
subjects | Computer & video games; convolutional networks; cross convolution; Data models; Feature maps; frame synthesis; Frames; future prediction; Image contrast; Image generation; Object motion; Predictive models; Probabilistic logic; probabilistic modeling; Probabilistic models; Probability theory; Synthesis; Task analysis; Training; Two dimensional models; Visualization |
title | Visual Dynamics: Stochastic Future Generation via Layered Cross Convolutional Networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T19%3A51%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Visual%20Dynamics:%20Stochastic%20Future%20Generation%20via%20Layered%20Cross%20Convolutional%20Networks&rft.jtitle=IEEE%20transactions%20on%20pattern%20analysis%20and%20machine%20intelligence&rft.au=Xue,%20Tianfan&rft.date=2019-09-01&rft.volume=41&rft.issue=9&rft.spage=2236&rft.epage=2250&rft.pages=2236-2250&rft.issn=0162-8828&rft.eissn=1939-3539&rft.coden=ITPIDJ&rft_id=info:doi/10.1109/TPAMI.2018.2854726&rft_dat=%3Cproquest_RIE%3E2070239712%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2269694221&rft_id=info:pmid/30004870&rft_ieee_id=8409321&rfr_iscdi=true |
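The abstract describes a "cross convolution": image content is encoded as feature maps and motion as per-channel convolutional kernels, and the two are combined by convolving each channel with its own kernel. The following is a minimal illustrative sketch of that operation only, not the authors' implementation; the function name, the `(C, H, W)` layout, and the use of zero-padded cross-correlation are assumptions.

```python
import numpy as np

def cross_convolve(feature_maps, kernels):
    """Apply a distinct kernel to each feature-map channel (cross convolution sketch).

    feature_maps: (C, H, W) array -- image content encoded as feature maps.
    kernels:      (C, k, k) array -- motion encoded as one kernel per channel.
    Returns a (C, H, W) array: each channel cross-correlated with its own
    kernel, using zero padding to keep the spatial size ("same" output).
    """
    C, H, W = feature_maps.shape
    k = kernels.shape[1]
    pad = k // 2
    # Zero-pad the spatial dimensions only.
    padded = np.pad(feature_maps, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(feature_maps)
    for c in range(C):          # one kernel per channel, unlike a standard conv layer
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(padded[c, i:i + k, j:j + k] * kernels[c])
    return out

# Sanity check: an identity kernel (1 at the center) leaves each channel unchanged.
f = np.arange(16, dtype=float).reshape(1, 4, 4)
ident = np.zeros((1, 3, 3))
ident[0, 1, 1] = 1.0
assert np.allclose(cross_convolve(f, ident), f)
```

In the paper's setting, the kernels are predicted from a motion code sampled per image, so each sample shifts the feature maps differently; the per-channel pairing (rather than summing over input channels) is what distinguishes this from an ordinary convolutional layer.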