Contrast sensitivity function in deep networks

Bibliographic details
Published in: Neural networks 2023-07, Vol.164, p.228-244
Main authors: Akbarinia, Arash; Morgenstern, Yaniv; Gegenfurtner, Karl R.
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 244
container_issue
container_start_page 228
container_title Neural networks
container_volume 164
creator Akbarinia, Arash
Morgenstern, Yaniv
Gegenfurtner, Karl R.
description The contrast sensitivity function (CSF) is a fundamental signature of the visual system that has been measured extensively in several species. It is defined by the visibility threshold for sinusoidal gratings across all spatial frequencies. Here, we investigated the CSF in deep neural networks using the same 2AFC contrast-detection paradigm as in human psychophysics. We examined 240 networks pretrained on several tasks. To obtain their corresponding CSFs, we trained a linear classifier on top of the features extracted from the frozen pretrained networks. The linear classifier is trained exclusively on a contrast-discrimination task with natural images: it has to find which of the two input images has higher contrast. The network's CSF is then measured by detecting which of two images contains a sinusoidal grating of varying orientation and spatial frequency. Our results demonstrate that characteristics of the human CSF are manifested in deep networks both in the luminance channel (a band-limited, inverted-U-shaped function) and in the chromatic channels (two low-pass functions with similar properties). The exact shape of the networks' CSF appears to be task-dependent. The human CSF is better captured by networks trained on low-level visual tasks such as image denoising or autoencoding. However, a human-like CSF also emerges in mid- and high-level tasks such as edge detection and object recognition. Our analysis shows that a human-like CSF appears in all architectures but at different depths of processing: in some networks at early layers, in others at intermediate and final layers. Overall, these results suggest that (i) deep networks model the human CSF faithfully, making them suitable candidates for image-quality and compression applications, (ii) efficient, purposeful processing of the natural world drives the CSF shape, and (iii) visual representations from all levels of the visual hierarchy contribute to the tuning curve of the CSF, in turn implying that a function we intuitively think of as modulated by low-level visual features may arise as a consequence of pooling from a larger set of neurons at all levels of the visual system.
•Contrast sensitivity function (CSF), elemental in biological vision, emerges in DNNs.
•The visual task that a network is trained to perform critically shapes its CSF.
•Low-level tasks capture the human CSF best, but it is also present in high-level tasks.
•Human-like CSF appears at several depths of visual features, from early to late layers.
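The description above specifies a measurement pipeline: freeze a pretrained network, train a linear 2AFC probe on its features with a natural-image contrast-discrimination task, then read out a CSF point by testing whether the probe can tell a low-contrast sinusoidal grating from a uniform field at a given spatial frequency and orientation. The sketch below illustrates the shape of that pipeline in Python/PyTorch; it is not the authors' released code. The backbone choice (a torchvision ResNet-50, assuming torchvision >= 0.13), the helper names (`sinusoidal_grating`, `predict_interval`), and the omission of the probe's training loop and of input normalization are all illustrative assumptions.

```python
# Hedged sketch of the 2AFC contrast-detection probe described in the abstract.
# Names, parameters, and the backbone are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models


def sinusoidal_grating(size=224, cycles_per_image=8, orientation_deg=0.0,
                       contrast=0.5, mean_luminance=0.5):
    """Grayscale sinusoidal grating with Michelson contrast `contrast`."""
    y, x = np.meshgrid(np.linspace(0, 1, size), np.linspace(0, 1, size), indexing="ij")
    theta = np.deg2rad(orientation_deg)
    # Rotate the coordinate frame so the grating varies along `orientation_deg`.
    u = x * np.cos(theta) + y * np.sin(theta)
    grating = mean_luminance * (1.0 + contrast * np.sin(2 * np.pi * cycles_per_image * u))
    # Replicate the single channel to a 3-channel image expected by the backbone.
    return torch.tensor(grating, dtype=torch.float32).expand(3, size, size)


# Frozen pretrained backbone (ImageNet classification as one example task).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()          # expose the penultimate 2048-d features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

# Linear 2AFC probe over the concatenated features of the two intervals.
# In the paper's setup this probe is trained on natural-image contrast
# discrimination; that training loop is omitted here for brevity.
feat_dim = 2048
probe = nn.Linear(2 * feat_dim, 2)


def predict_interval(img_a, img_b):
    """Return 0 if the probe picks the first image, 1 for the second."""
    with torch.no_grad():
        fa = backbone(img_a.unsqueeze(0))
        fb = backbone(img_b.unsqueeze(0))
    logits = probe(torch.cat([fa, fb], dim=1))
    return logits.argmax(dim=1).item()


# At test time one interval holds a grating of a given spatial frequency and
# orientation, the other a uniform field of the same mean luminance.
uniform = torch.full((3, 224, 224), 0.5)
grating = sinusoidal_grating(cycles_per_image=8, orientation_deg=45.0, contrast=0.02)
choice = predict_interval(grating, uniform)
```

Sweeping the grating contrast at each spatial frequency and fitting a psychometric function to the probe's percent-correct (e.g., taking the contrast at roughly 75% correct as the threshold, as is standard in 2AFC psychophysics) would yield one point of the network's CSF; repeating this across frequencies traces out the full curve described above.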
doi_str_mv 10.1016/j.neunet.2023.04.032
format Article
fullrecord <record><control><sourceid>proquest_cross</sourceid><recordid>TN_cdi_proquest_miscellaneous_2811566547</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><els_id>S0893608023002186</els_id><sourcerecordid>2811566547</sourcerecordid><originalsourceid>FETCH-LOGICAL-c408t-6b7f5b539e879a0cd1ef325d32f06aca6eb72fc131a638f44e366c10a376bf5b3</originalsourceid><addsrcrecordid>eNp9kD1PwzAQhi0EoqXwDxDKyJJwthPHXZBQxZdUiQVmy3EukkvrFNsp6r_HVQoj0y3Pe-_dQ8g1hYICFXerwuHgMBYMGC-gLICzEzKlsp7nrJbslExBznkuQMKEXISwAgAhS35OJrymlWC0npJi0bvodYhZQBdstDsb91k3OBNt7zLrshZxm6We795_hkty1ul1wKvjnJGPp8f3xUu-fHt-XTwsc1OCjLlo6q5qKj7HdI0G01LsOKtazjoQ2miBTc06QznVgsuuLJELYShoXosmJfmM3I57t77_GjBEtbHB4HqtHfZDUEzS9IGoyjqh5Yga34fgsVNbbzfa7xUFdTClVmo0pQ6mFJQqmUqxm2PD0Gyw_Qv9qknA_Qhg-nNn0atgLDqDrfVoomp7-3_DD3kPe7Y</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2811566547</pqid></control><display><type>article</type><title>Contrast sensitivity function in deep networks</title><source>MEDLINE</source><source>ScienceDirect Journals (5 years ago - present)</source><creator>Akbarinia, Arash ; Morgenstern, Yaniv ; Gegenfurtner, Karl R.</creator><creatorcontrib>Akbarinia, Arash ; Morgenstern, Yaniv ; Gegenfurtner, Karl R.</creatorcontrib><description>The contrast sensitivity function (CSF) is a fundamental signature of the visual system that has been measured extensively in several species. It is defined by the visibility threshold for sinusoidal gratings at all spatial frequencies. Here, we investigated the CSF in deep neural networks using the same 2AFC contrast detection paradigm as in human psychophysics. We examined 240 networks pretrained on several tasks. To obtain their corresponding CSFs, we trained a linear classifier on top of the extracted features from frozen pretrained networks. The linear classifier is exclusively trained on a contrast discrimination task with natural images. It has to find which of the two input images has higher contrast. The network’s CSF is measured by detecting which one of two images contains a sinusoidal grating of varying orientation and spatial frequency. Our results demonstrate characteristics of the human CSF are manifested in deep networks both in the luminance channel (a band-limited inverted U-shaped function) and in the chromatic channels (two low-pass functions of similar properties). The exact shape of the networks’ CSF appears to be task-dependent. The human CSF is better captured by networks trained on low-level visual tasks such as image-denoising or autoencoding. However, human-like CSF also emerges in mid- and high-level tasks such as edge detection and object recognition. Our analysis shows that human-like CSF appears in all architectures but at different depths of processing, some at early layers, while others in intermediate and final layers. Overall, these results suggest that (i) deep networks model the human CSF faithfully, making them suitable candidates for applications of image quality and compression, (ii) efficient/purposeful processing of the natural world drives the CSF shape, and (iii) visual representation from all levels of visual hierarchy contribute to the tuning curve of the CSF, in turn implying a function which we intuitively think of as modulated by low-level visual features may arise as a consequence of pooling from a larger set of neurons at all levels of the visual system. 
•Contrast sensitivity function (CSF), elemental in biological vision, emerges in DNNs.•The visual task that a network is trained to perform critically shapes its CSF.•Low-level tasks capture the human CSF best, but it is also present in high-level tasks.•Human-like CSF appears at several depths of visual features, from early to late layers.</description><identifier>ISSN: 0893-6080</identifier><identifier>EISSN: 1879-2782</identifier><identifier>DOI: 10.1016/j.neunet.2023.04.032</identifier><identifier>PMID: 37156217</identifier><language>eng</language><publisher>United States: Elsevier Ltd</publisher><subject>Artificial neural networks ; Contrast ; Contrast Sensitivity ; CSF ; Deep learning ; Humans ; Neural Networks, Computer ; Neurons - physiology ; Pattern Recognition, Visual - physiology ; Psychophysics ; Visual features ; Visual perception ; Visual Perception - physiology</subject><ispartof>Neural networks, 2023-07, Vol.164, p.228-244</ispartof><rights>2023 Elsevier Ltd</rights><rights>Copyright © 2023 Elsevier Ltd. All rights reserved.</rights><lds50>peer_reviewed</lds50><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c408t-6b7f5b539e879a0cd1ef325d32f06aca6eb72fc131a638f44e366c10a376bf5b3</citedby><cites>FETCH-LOGICAL-c408t-6b7f5b539e879a0cd1ef325d32f06aca6eb72fc131a638f44e366c10a376bf5b3</cites><orcidid>0000-0002-4249-231X ; 0000-0001-6902-667X</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://dx.doi.org/10.1016/j.neunet.2023.04.032$$EHTML$$P50$$Gelsevier$$H</linktohtml><link.rule.ids>314,780,784,3548,27923,27924,45994</link.rule.ids><backlink>$$Uhttps://www.ncbi.nlm.nih.gov/pubmed/37156217$$D View this record in MEDLINE/PubMed$$Hfree_for_read</backlink></links><search><creatorcontrib>Akbarinia, Arash</creatorcontrib><creatorcontrib>Morgenstern, Yaniv</creatorcontrib><creatorcontrib>Gegenfurtner, Karl R.</creatorcontrib><title>Contrast sensitivity function in deep networks</title><title>Neural networks</title><addtitle>Neural Netw</addtitle><description>The contrast sensitivity function (CSF) is a fundamental signature of the visual system that has been measured extensively in several species. It is defined by the visibility threshold for sinusoidal gratings at all spatial frequencies. Here, we investigated the CSF in deep neural networks using the same 2AFC contrast detection paradigm as in human psychophysics. We examined 240 networks pretrained on several tasks. To obtain their corresponding CSFs, we trained a linear classifier on top of the extracted features from frozen pretrained networks. The linear classifier is exclusively trained on a contrast discrimination task with natural images. It has to find which of the two input images has higher contrast. The network’s CSF is measured by detecting which one of two images contains a sinusoidal grating of varying orientation and spatial frequency. Our results demonstrate characteristics of the human CSF are manifested in deep networks both in the luminance channel (a band-limited inverted U-shaped function) and in the chromatic channels (two low-pass functions of similar properties). The exact shape of the networks’ CSF appears to be task-dependent. The human CSF is better captured by networks trained on low-level visual tasks such as image-denoising or autoencoding. 
However, human-like CSF also emerges in mid- and high-level tasks such as edge detection and object recognition. Our analysis shows that human-like CSF appears in all architectures but at different depths of processing, some at early layers, while others in intermediate and final layers. Overall, these results suggest that (i) deep networks model the human CSF faithfully, making them suitable candidates for applications of image quality and compression, (ii) efficient/purposeful processing of the natural world drives the CSF shape, and (iii) visual representation from all levels of visual hierarchy contribute to the tuning curve of the CSF, in turn implying a function which we intuitively think of as modulated by low-level visual features may arise as a consequence of pooling from a larger set of neurons at all levels of the visual system. •Contrast sensitivity function (CSF), elemental in biological vision, emerges in DNNs.•The visual task that a network is trained to perform critically shapes its CSF.•Low-level tasks capture the human CSF best, but it is also present in high-level tasks.•Human-like CSF appears at several depths of visual features, from early to late layers.</description><subject>Artificial neural networks</subject><subject>Contrast</subject><subject>Contrast Sensitivity</subject><subject>CSF</subject><subject>Deep learning</subject><subject>Humans</subject><subject>Neural Networks, Computer</subject><subject>Neurons - physiology</subject><subject>Pattern Recognition, Visual - physiology</subject><subject>Psychophysics</subject><subject>Visual features</subject><subject>Visual perception</subject><subject>Visual Perception - physiology</subject><issn>0893-6080</issn><issn>1879-2782</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2023</creationdate><recordtype>article</recordtype><sourceid>EIF</sourceid><recordid>eNp9kD1PwzAQhi0EoqXwDxDKyJJwthPHXZBQxZdUiQVmy3EukkvrFNsp6r_HVQoj0y3Pe-_dQ8g1hYICFXerwuHgMBYMGC-gLICzEzKlsp7nrJbslExBznkuQMKEXISwAgAhS35OJrymlWC0npJi0bvodYhZQBdstDsb91k3OBNt7zLrshZxm6We795_hkty1ul1wKvjnJGPp8f3xUu-fHt-XTwsc1OCjLlo6q5qKj7HdI0G01LsOKtazjoQ2miBTc06QznVgsuuLJELYShoXosmJfmM3I57t77_GjBEtbHB4HqtHfZDUEzS9IGoyjqh5Yga34fgsVNbbzfa7xUFdTClVmo0pQ6mFJQqmUqxm2PD0Gyw_Qv9qknA_Qhg-nNn0atgLDqDrfVoomp7-3_DD3kPe7Y</recordid><startdate>202307</startdate><enddate>202307</enddate><creator>Akbarinia, Arash</creator><creator>Morgenstern, Yaniv</creator><creator>Gegenfurtner, Karl R.</creator><general>Elsevier Ltd</general><scope>CGR</scope><scope>CUY</scope><scope>CVF</scope><scope>ECM</scope><scope>EIF</scope><scope>NPM</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7X8</scope><orcidid>https://orcid.org/0000-0002-4249-231X</orcidid><orcidid>https://orcid.org/0000-0001-6902-667X</orcidid></search><sort><creationdate>202307</creationdate><title>Contrast sensitivity function in deep networks</title><author>Akbarinia, Arash ; Morgenstern, Yaniv ; Gegenfurtner, Karl R.</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c408t-6b7f5b539e879a0cd1ef325d32f06aca6eb72fc131a638f44e366c10a376bf5b3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2023</creationdate><topic>Artificial neural networks</topic><topic>Contrast</topic><topic>Contrast Sensitivity</topic><topic>CSF</topic><topic>Deep learning</topic><topic>Humans</topic><topic>Neural Networks, Computer</topic><topic>Neurons - physiology</topic><topic>Pattern Recognition, Visual - 
physiology</topic><topic>Psychophysics</topic><topic>Visual features</topic><topic>Visual perception</topic><topic>Visual Perception - physiology</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Akbarinia, Arash</creatorcontrib><creatorcontrib>Morgenstern, Yaniv</creatorcontrib><creatorcontrib>Gegenfurtner, Karl R.</creatorcontrib><collection>Medline</collection><collection>MEDLINE</collection><collection>MEDLINE (Ovid)</collection><collection>MEDLINE</collection><collection>MEDLINE</collection><collection>PubMed</collection><collection>CrossRef</collection><collection>MEDLINE - Academic</collection><jtitle>Neural networks</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Akbarinia, Arash</au><au>Morgenstern, Yaniv</au><au>Gegenfurtner, Karl R.</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Contrast sensitivity function in deep networks</atitle><jtitle>Neural networks</jtitle><addtitle>Neural Netw</addtitle><date>2023-07</date><risdate>2023</risdate><volume>164</volume><spage>228</spage><epage>244</epage><pages>228-244</pages><issn>0893-6080</issn><eissn>1879-2782</eissn><abstract>The contrast sensitivity function (CSF) is a fundamental signature of the visual system that has been measured extensively in several species. It is defined by the visibility threshold for sinusoidal gratings at all spatial frequencies. Here, we investigated the CSF in deep neural networks using the same 2AFC contrast detection paradigm as in human psychophysics. We examined 240 networks pretrained on several tasks. To obtain their corresponding CSFs, we trained a linear classifier on top of the extracted features from frozen pretrained networks. The linear classifier is exclusively trained on a contrast discrimination task with natural images. It has to find which of the two input images has higher contrast. The network’s CSF is measured by detecting which one of two images contains a sinusoidal grating of varying orientation and spatial frequency. Our results demonstrate characteristics of the human CSF are manifested in deep networks both in the luminance channel (a band-limited inverted U-shaped function) and in the chromatic channels (two low-pass functions of similar properties). The exact shape of the networks’ CSF appears to be task-dependent. The human CSF is better captured by networks trained on low-level visual tasks such as image-denoising or autoencoding. However, human-like CSF also emerges in mid- and high-level tasks such as edge detection and object recognition. Our analysis shows that human-like CSF appears in all architectures but at different depths of processing, some at early layers, while others in intermediate and final layers. Overall, these results suggest that (i) deep networks model the human CSF faithfully, making them suitable candidates for applications of image quality and compression, (ii) efficient/purposeful processing of the natural world drives the CSF shape, and (iii) visual representation from all levels of visual hierarchy contribute to the tuning curve of the CSF, in turn implying a function which we intuitively think of as modulated by low-level visual features may arise as a consequence of pooling from a larger set of neurons at all levels of the visual system. 
•Contrast sensitivity function (CSF), elemental in biological vision, emerges in DNNs.•The visual task that a network is trained to perform critically shapes its CSF.•Low-level tasks capture the human CSF best, but it is also present in high-level tasks.•Human-like CSF appears at several depths of visual features, from early to late layers.</abstract><cop>United States</cop><pub>Elsevier Ltd</pub><pmid>37156217</pmid><doi>10.1016/j.neunet.2023.04.032</doi><tpages>17</tpages><orcidid>https://orcid.org/0000-0002-4249-231X</orcidid><orcidid>https://orcid.org/0000-0001-6902-667X</orcidid><oa>free_for_read</oa></addata></record>
fulltext fulltext
identifier ISSN: 0893-6080
ispartof Neural networks, 2023-07, Vol.164, p.228-244
issn 0893-6080
1879-2782
language eng
recordid cdi_proquest_miscellaneous_2811566547
source MEDLINE; ScienceDirect Journals (5 years ago - present)
subjects Artificial neural networks
Contrast
Contrast Sensitivity
CSF
Deep learning
Humans
Neural Networks, Computer
Neurons - physiology
Pattern Recognition, Visual - physiology
Psychophysics
Visual features
Visual perception
Visual Perception - physiology
title Contrast sensitivity function in deep networks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T21%3A09%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Contrast%20sensitivity%20function%20in%20deep%20networks&rft.jtitle=Neural%20networks&rft.au=Akbarinia,%20Arash&rft.date=2023-07&rft.volume=164&rft.spage=228&rft.epage=244&rft.pages=228-244&rft.issn=0893-6080&rft.eissn=1879-2782&rft_id=info:doi/10.1016/j.neunet.2023.04.032&rft_dat=%3Cproquest_cross%3E2811566547%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2811566547&rft_id=info:pmid/37156217&rft_els_id=S0893608023002186&rfr_iscdi=true