When less is more: Simplifying inputs aids neural network understanding
How do neural network image classifiers respond to simpler and simpler inputs? And what do such responses reveal about the learning process? To answer these questions, we need a clear measure of input simplicity (or inversely, complexity), an optimization objective that correlates with simplification, and a framework to incorporate such an objective into training and inference. Lastly, we need a variety of testbeds to experiment with and evaluate the impact of such simplification on learning. In this work, we measure simplicity with the encoding bit size given by a pretrained generative model, and minimize the bit size to simplify inputs in training and inference. We investigate the effect of such simplification in several scenarios: conventional training, dataset condensation, and post-hoc explanation. In all settings, inputs are simplified along with the original classification task, and we investigate the trade-off between input simplicity and task performance. For images with injected distractors, such simplification naturally removes superfluous information. For dataset condensation, we find that inputs can be simplified with almost no accuracy degradation. When used in post-hoc explanation, our learning-based simplification approach offers a valuable new tool to explore the basis of network decisions.
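To make the bit-size objective concrete, the following is a minimal sketch (not the authors' implementation) of simplifying a single batch of inputs at inference time. It assumes a classifier `clf` returning logits and a pretrained generative model `gen` exposing a `log_prob` method that returns per-sample log-likelihood in nats; both names and the `log_prob` interface are illustrative assumptions.

```python
# Hedged sketch: jointly minimize the classification loss and the
# encoding bit size of the input under a pretrained generative model.
# `clf` and `gen` are hypothetical stand-ins, not the paper's API.
import math
import torch
import torch.nn.functional as F

def simplify_input(x, y, clf, gen, lam=0.01, steps=200, lr=0.05):
    """Gradient-descend on a copy of x to lower its encoding bit size
    while keeping the classifier's prediction on label y."""
    x_s = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_s], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        task_loss = F.cross_entropy(clf(x_s), y)
        # Bits per dimension: -log2 p(x) divided by the number of
        # dimensions per sample (nats -> bits via log(2)).
        bits = -gen.log_prob(x_s) / (math.log(2) * x_s[0].numel())
        loss = task_loss + lam * bits.mean()
        loss.backward()
        opt.step()
    return x_s.detach()
```

During training, the same bit-size penalty can be added to the classifier's loss so that inputs are simplified alongside the original task; the weight `lam` then controls the trade-off between input simplicity and task performance described in the abstract.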
Saved in:
Published in: | arXiv.org, 2022-02 |
---|---|
Main authors: | Schirrmeister, Robin Tibor; Liu, Rosanne; Hooker, Sara; Ball, Tonio |
Format: | Article |
Language: | English |
EISSN: | 2331-8422 |
Publisher: | Ithaca: Cornell University Library, arXiv.org |
Rights: | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ |
Subjects: | Datasets; Inference; Learning; Neural networks; Optimization; Simplification; Training |
Source: | Free E-Journals |
Online access: | Full text |