Color for object recognition: Hue and chroma sensitivity in the deep features of convolutional neural networks

In this work, we examined the color tuning of units in the hidden layers of the AlexNet, VGG-16 and VGG-19 convolutional neural networks and their relevance for the successful recognition of an object. We first selected the patches for which the units are maximally responsive among the 1.2 M images of the ImageNet training dataset. We segmented these patches using a k-means clustering algorithm on their chromatic distribution. Then we independently varied the color of these segments, both in hue and chroma, to measure each unit's chromatic tuning. The models exhibited properties at times similar and at times opposed to the known chromatic processing of biological systems. We found that, similarly to the most anterior occipital visual areas in primates, the last convolutional layer exhibited high color sensitivity. We also found the gradual emergence of single- to double-opponent kernels. Contrary to cells in the visual system, however, these kernels were selective for hues whose distribution gradually shifts from broad in early layers to concentrated along the blue-orange axis in late layers. In addition, we found that the classification performance of our models varies with the color of our stimuli in a way that follows the kernels' properties. Performance was highest for the colors to which the kernels responded maximally, and images responsible for the activation of color-sensitive kernels were more likely to be misclassified when their color was changed. These observations were shared by all three networks, suggesting that they are general properties of current convolutional neural networks trained for object recognition.

Bibliographic details
Published in: Vision research (Oxford) 2021-05, Vol.182, p.89-100
Main authors: Flachot, Alban, Gegenfurtner, Karl R.
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 100
container_issue
container_start_page 89
container_title Vision research (Oxford)
container_volume 182
creator Flachot, Alban
Gegenfurtner, Karl R.
description In this work, we examined the color tuning of units in the hidden layers of the AlexNet, VGG-16 and VGG-19 convolutional neural networks and their relevance for the successful recognition of an object. We first selected the patches for which the units are maximally responsive among the 1.2 M images of the ImageNet training dataset. We segmented these patches using a k-means clustering algorithm on their chromatic distribution. Then we independently varied the color of these segments, both in hue and chroma, to measure each unit's chromatic tuning. The models exhibited properties at times similar and at times opposed to the known chromatic processing of biological systems. We found that, similarly to the most anterior occipital visual areas in primates, the last convolutional layer exhibited high color sensitivity. We also found the gradual emergence of single- to double-opponent kernels. Contrary to cells in the visual system, however, these kernels were selective for hues whose distribution gradually shifts from broad in early layers to concentrated along the blue-orange axis in late layers. In addition, we found that the classification performance of our models varies with the color of our stimuli in a way that follows the kernels' properties. Performance was highest for the colors to which the kernels responded maximally, and images responsible for the activation of color-sensitive kernels were more likely to be misclassified when their color was changed. These observations were shared by all three networks, suggesting that they are general properties of current convolutional neural networks trained for object recognition.
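The segmentation and color-manipulation steps described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function names, the choice of CIELAB-like (a*, b*) chromatic coordinates, the cluster count, and the plain Lloyd's-algorithm k-means are all assumptions for the sake of the example.

```python
import numpy as np

def kmeans_chroma(ab, k=3, iters=20, seed=0):
    """Cluster pixels by their chromatic coordinates.

    ab: (N, 2) array of chromatic coordinates (e.g. CIELAB a*, b*).
    Returns (labels, centers). Plain Lloyd's algorithm, used here as a
    hypothetical stand-in for the paper's patch-segmentation step.
    """
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct pixels (fancy indexing copies).
    centers = ab[rng.choice(len(ab), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest chromatic center.
        d = np.linalg.norm(ab[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = ab[labels == j].mean(axis=0)
    return labels, centers

def rotate_hue(ab, labels, segment, angle_deg):
    """Rotate the hue of one segment by angle_deg around the neutral
    point (0, 0) of the chromatic plane. Because this is a rigid
    rotation, chroma (distance from neutral) is preserved."""
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    out = ab.copy()
    mask = labels == segment
    out[mask] = out[mask] @ R.T
    return out
```

Varying chroma independently of hue would amount to scaling the masked coordinates radially instead of rotating them; sweeping `angle_deg` over the hue circle while recording a unit's response then yields a hue-tuning curve of the kind measured in the paper.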
doi_str_mv 10.1016/j.visres.2020.09.010
format Article
eissn 1878-5646
pmid 33611127
publisher England: Elsevier Ltd
rights 2021 Elsevier Ltd. All rights reserved.
oa free_for_read
fulltext fulltext
identifier ISSN: 0042-6989
ispartof Vision research (Oxford), 2021-05, Vol.182, p.89-100
issn 0042-6989
1878-5646
language eng
recordid cdi_proquest_miscellaneous_2492283914
source MEDLINE; ScienceDirect Journals (5 years ago - present); EZB-FREE-00999 freely available EZB journals
subjects Algorithms
Chroma responsivity
Color
Deep learning
Feature visualization
Hue selectivity
Neural Networks, Computer
Object recognition
Visual Perception
title Color for object recognition: Hue and chroma sensitivity in the deep features of convolutional neural networks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-12T01%3A38%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Color%20for%20object%20recognition:%20Hue%20and%20chroma%20sensitivity%20in%20the%20deep%20features%20of%20convolutional%20neural%20networks&rft.jtitle=Vision%20research%20(Oxford)&rft.au=Flachot,%20Alban&rft.date=2021-05-01&rft.volume=182&rft.spage=89&rft.epage=100&rft.pages=89-100&rft.issn=0042-6989&rft.eissn=1878-5646&rft_id=info:doi/10.1016/j.visres.2020.09.010&rft_dat=%3Cproquest_cross%3E2492283914%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2492283914&rft_id=info:pmid/33611127&rft_els_id=S004269892100016X&rfr_iscdi=true