The computational intractability of training sigmoidal neural networks

We demonstrate that the problem of approximately interpolating a target function by a neural network is computationally intractable. In particular, the interpolation training problem for a neural network with two monotone Lipschitzian sigmoidal internal activation functions and one linear output node is shown to be NP-hard, and NP-complete if the internal nodes are in addition piecewise ratios of polynomials. This partially answers a question of Blum and Rivest (1992) concerning the NP-completeness of training a logistic sigmoidal 3-node network. An extension of the result is then given for networks with n monotone sigmoidal internal nodes and one convex output node. This indicates that many multivariate nonlinear regression problems may be computationally infeasible.
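The abstract concerns the approximate-interpolation training problem for a 3-node network: two monotone Lipschitzian sigmoidal hidden units combined by one linear output node, trained so that every sample is matched to within a fixed tolerance. The sketch below is illustrative only and not taken from the paper: it assumes the logistic sigmoid as the sigmoidal activation and a sup-norm tolerance eps, and the names three_node_net and interpolates are hypothetical. It makes the membership side of the NP-completeness result concrete: checking whether a proposed weight vector interpolates the data is cheap, whereas the hardness result says that finding such weights is intractable.

```python
# Illustrative sketch (assumed setup, not the paper's notation): the 3-node
# architecture from the abstract -- two monotone sigmoidal hidden nodes feeding
# one linear output node -- and the decision version of its training problem.
import numpy as np

def logistic(z):
    # Logistic sigmoid: a monotone, Lipschitzian sigmoidal activation.
    return 1.0 / (1.0 + np.exp(-z))

def three_node_net(x, w1, b1, w2, b2, a1, a2, c):
    # Linear output node combining two sigmoidal internal nodes.
    return a1 * logistic(x @ w1 + b1) + a2 * logistic(x @ w2 + b2) + c

def interpolates(params, X, y, eps):
    # Decision question behind the training problem: do these weights fit
    # every sample (x_i, y_i) to within eps?  Verifying a candidate weight
    # vector is easy; the paper's result is that *finding* one is NP-hard.
    w1, b1, w2, b2, a1, a2, c = params
    pred = three_node_net(X, w1, b1, w2, b2, a1, a2, c)
    return bool(np.all(np.abs(pred - y) <= eps))

# Toy usage: five samples in R^3 and an arbitrary candidate weight setting.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(5, 3)), rng.normal(size=5)
params = (rng.normal(size=3), 0.0, rng.normal(size=3), 0.0, 1.0, -1.0, 0.0)
print(interpolates(params, X, y, eps=0.5))
```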

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE transactions on information theory, 1997-01, Vol. 43 (1), p. 167-173
Main Author: Jones, L.K.
Format: Article
Language: English
Subjects:
Online Access: Order full text
Journal: IEEE transactions on information theory
Volume: 43
Issue: 1
Pages: 167-173
Creator: Jones, L.K.
DOI: 10.1109/18.567673
Format: Article
Publisher: IEEE, New York
CODEN: IETTAW
Rights: Copyright Institute of Electrical and Electronics Engineers, Inc. (IEEE), Jan 1997
ISSN: 0018-9448
EISSN: 1557-9654
Record ID: cdi_proquest_journals_195904498
Source: IEEE Electronic Library (IEL)
Subjects:
Computer networks
Feedforward neural networks
Information technology
Interpolation
Logistics
Neural networks
Polynomials
Search problems
Vectors