Unsupervised learning of object semantic parts from internal states of CNNs by population encoding
Saved in:
| Published in: | arXiv.org, 2016-11 |
|---|---|
| Main authors: | Wang, Jianyu; Zhang, Zhishuai; Xie, Cihang; Vittal Premachandran; Yuille, Alan |
| Format: | Article |
| Language: | eng |
| Subjects: | Annotations; Clustering; Datasets; Representations; Semantics; Unsupervised learning |
| Online access: | Full text |
| container_end_page | |
|---|---|
| container_issue | |
| container_start_page | |
| container_title | arXiv.org |
| container_volume | |
| creator | Wang, Jianyu; Zhang, Zhishuai; Xie, Cihang; Vittal Premachandran; Yuille, Alan |
| description | We address the key question of how object part representations can be found from the internal states of CNNs that are trained for high-level tasks, such as object classification. This work provides a new unsupervised method to learn semantic parts and gives new understanding of the internal representations of CNNs. Our technique is based on the hypothesis that semantic parts are represented by populations of neurons rather than by single filters. We propose a clustering technique to extract part representations, which we call Visual Concepts. We show that visual concepts are semantically coherent in that they represent semantic parts, and visually coherent in that corresponding image patches appear very similar. Moreover, visual concepts provide full spatial coverage of the parts of an object, rather than the few sparse parts typically found in keypoint annotations. Furthermore, we treat each visual concept as a part detector and evaluate it for keypoint detection using the PASCAL3D+ dataset and for part detection using our newly annotated ImageNetPart dataset. The experiments demonstrate that visual concepts can be used to detect parts. We also show that some visual concepts respond to several semantic parts, provided these parts are visually similar. Thus visual concepts have both essential properties: semantic meaning and detection capability. Note that our ImageNetPart dataset provides rich part annotations that cover the whole object, making it useful for other part-related applications. |
| format | Article |
| fulltext | fulltext |
| identifier | EISSN: 2331-8422 |
| ispartof | arXiv.org, 2016-11 |
| issn | 2331-8422 |
| recordid | cdi_proquest_journals_2080885163 |
| source | Free E-Journals |
| subjects | Annotations; Clustering; Datasets; Representations; Semantics; Unsupervised learning |
| title | Unsupervised learning of object semantic parts from internal states of CNNs by population encoding |
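The description above outlines the core technique: treat the responses of an intermediate CNN layer at each spatial position as a population code, cluster those feature vectors, and regard each cluster center as a candidate visual concept that can act as a part detector. The sketch below is a minimal illustration of that idea only, not the authors' released implementation; the VGG-16 backbone, the truncation point near pool4, the use of scikit-learn K-means, K = 64, and the example image filenames are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's released code): cluster per-position
# feature vectors from an intermediate CNN layer and treat cluster centers
# as candidate "visual concepts".
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans

# Pretrained backbone, truncated at an intermediate layer (first 23 modules of
# VGG-16 "features", an assumed truncation point near the pool4 stage).
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:23].eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def feature_population(image_paths):
    """Collect L2-normalized feature vectors from every spatial position."""
    vectors = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            fmap = backbone(x).squeeze(0)          # (C, H, W) feature map
            v = fmap.flatten(1).T.numpy()          # (H*W, C): one vector per position
            v /= np.linalg.norm(v, axis=1, keepdims=True) + 1e-8
            vectors.append(v)
    return np.concatenate(vectors, axis=0)

# K-means over the pooled population of feature vectors; each cluster center is
# a candidate visual concept. K = 64 and the filenames are illustrative only.
features = feature_population(["car_001.jpg", "car_002.jpg"])  # hypothetical files
concepts = KMeans(n_clusters=64, n_init=4, random_state=0).fit(features)

# A crude "detector": positions whose nearest center is concept k fire for it.
assignments = concepts.predict(features)
print("positions assigned to concept 0:", int((assignments == 0).sum()))
```

The evaluation reported in the abstract (keypoint detection on PASCAL3D+ and part detection on ImageNetPart) would additionally require mapping each firing position back to an image patch and scoring it against annotations, which this sketch omits.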