The minimum feature set problem
One approach to improving the generalization power of a neural net is to try to minimize the number of nonzero weights used. We examine two issues relevant to this approach, for single-layer nets. First we bound the VC dimension of the set of linear-threshold functions that have nonzero weights for at most s of n inputs. Second, we show that the problem of minimizing the number of nonzero input weights used (without misclassifying training examples) is both NP-hard and difficult to approximate.
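The problem the abstract describes can be made concrete with a small sketch (not from the paper; function names and the perceptron-based separability check are my own choices): search feature subsets in order of increasing size, and accept the first subset over which some linear-threshold function classifies all training examples correctly. The exhaustive search is exponential in the number of inputs, which is consistent with the paper's NP-hardness result.

```python
# Illustrative sketch of the minimum feature set problem: find the smallest
# set of input indices whose weights may be nonzero while a single
# linear-threshold unit still classifies every training example correctly.
from itertools import combinations


def separable(X, y, features, epochs=100):
    """Look for a consistent linear-threshold function that uses only the
    given feature indices, via simple perceptron updates with a bias.
    Labels are +1 / -1. Returns True if a consistent function is found."""
    w = [0.0] * len(features)
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xs, label in zip(X, y):
            s = b + sum(w[i] * xs[f] for i, f in enumerate(features))
            pred = 1 if s > 0 else -1
            if pred != label:
                for i, f in enumerate(features):
                    w[i] += label * xs[f]
                b += label
                errors += 1
        if errors == 0:
            return True  # perceptron converged: subset suffices
    return False


def min_feature_set(X, y):
    """Brute-force search over subsets by increasing size (exponential)."""
    n = len(X[0])
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if separable(X, y, subset):
                return subset
    return None
```

On a toy set where only the first input determines the label, the search returns the singleton `(0,)`, discarding the irrelevant second input.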
Published in: | Neural networks, 1994, Vol. 7 (3), p. 491-494 |
---|---|
Main authors: | Van Horn, Kevin S.; Martinez, Tony R. |
Format: | Article |
Language: | English |
Subjects: | Approximation algorithm; Complexity; Learning; Linear threshold; Minimization |
Online access: | Full text |
creator | Van Horn, Kevin S. ; Martinez, Tony R. |
description | One approach to improving the generalization power of a neural net is to try to minimize the number of nonzero weights used. We examine two issues relevant to this approach, for single-layer nets. First we bound the VC dimension of the set of linear-threshold functions that have nonzero weights for at most s of n inputs. Second, we show that the problem of minimizing the number of nonzero input weights used (without misclassifying training examples) is both NP-hard and difficult to approximate. |
doi_str_mv | 10.1016/0893-6080(94)90082-5 |
format | Article |
publisher | Elsevier Ltd |
fulltext | fulltext |
identifier | ISSN: 0893-6080 |
ispartof | Neural networks, 1994, Vol.7 (3), p.491-494 |
issn | ISSN: 0893-6080 ; EISSN: 1879-2782 |
language | eng |
recordid | cdi_proquest_miscellaneous_26450440 |
source | ScienceDirect Journals (5 years ago - present) |
subjects | Approximation algorithm ; Complexity ; Learning ; Linear threshold ; Minimization |
title | The minimum feature set problem |