Deep Insights into Convolutional Networks for Video Recognition

As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing the internal representation of models that have been trained to recognize actions in video. We visualize multiple two-stream architectures to show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g. motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.
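
The two technical ideas the abstract leans on can be made concrete with short sketches. The first is cross-stream fusion: combining the feature maps of an appearance (RGB) stream and a motion (optical-flow) stream so that later layers can learn genuinely spatiotemporal features rather than keeping the two streams separate. The PyTorch module below is a minimal sketch under the assumption of channel-wise concatenation followed by a 1x1x1 convolution; it is not the exact fusion architecture evaluated in the paper, and the class name CrossStreamFusion is hypothetical.

```python
import torch
import torch.nn as nn

class CrossStreamFusion(nn.Module):
    """Fuse appearance and motion feature maps of shape (N, C, T, H, W)."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1x1 convolution mixes the concatenated channels, letting the
        # network weight appearance against motion at every spatiotemporal location.
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        # Channel-wise concatenation, then learned mixing across both streams.
        return self.fuse(torch.cat([appearance, motion], dim=1))

# Example: fuse mid-level feature maps from the two streams.
rgb_feat = torch.randn(2, 256, 8, 14, 14)    # appearance-stream features
flow_feat = torch.randn(2, 256, 8, 14, 14)   # motion-stream features
fused = CrossStreamFusion(256)(rgb_feat, flow_feat)
print(fused.shape)  # torch.Size([2, 256, 8, 14, 14])
```

The second idea is visualizing internal representations. A common way to probe what a unit has learned is activation maximization: gradient ascent on an input clip so that it maximally excites the chosen unit. The sketch below assumes only this generic procedure and a model that maps a clip to per-unit responses; it omits the regularization a practical visualization method needs and is not the paper's specific technique.

```python
import torch

def visualize_unit(model, unit: int, steps: int = 200, lr: float = 0.1) -> torch.Tensor:
    """Gradient ascent on a random clip to maximize one output unit of `model`."""
    clip = torch.randn(1, 3, 16, 112, 112, requires_grad=True)  # (N, C, T, H, W)
    optimizer = torch.optim.Adam([clip], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        activation = model(clip)[0, unit]   # response of the chosen unit
        (-activation).backward()            # minimizing the negative = ascending
        optimizer.step()
    return clip.detach()
```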

Bibliographic details
Published in: International journal of computer vision, 2020-02, Vol. 128 (2), pp. 420-437
Authors: Feichtenhofer, Christoph; Pinz, Axel; Wildes, Richard P.; Zisserman, Andrew
Format: Article
Language: English
Publisher: Springer US, New York
DOI: 10.1007/s11263-019-01225-w
ISSN: 0920-5691
EISSN: 1573-1405
Rights: © The Author(s) 2019; published open access under the Creative Commons Attribution 4.0 License
Subjects: Analysis; Artificial Intelligence; Computer Imaging; Computer Science; Computer vision; Human motion; Image Processing and Computer Vision; Machine vision; Object motion; Object recognition; Pattern Recognition; Pattern Recognition and Graphics; Representations; Vision
Online access: Full text