Determining significance of input neurons for probabilistic neural network by sensitivity analysis procedure
In classical feedforward neural networks such as multilayer perceptron, radial basis function network, or counter‐propagation network, the neurons in the input layer correspond to features of the training patterns. The number of these features may be large, and their meaningfulness can be various. Therefore, the selection of appropriate input neurons should be regarded. The aim of this paper is to present a complete step‐by‐step algorithm for determining the significance of particular input neurons of the probabilistic neural network (PNN). It is based on the sensitivity analysis procedure applied to a trained PNN. The proposed algorithm is utilized in the task of reduction of the input layer of the considered network, which is achieved by removing appropriately indicated features from the data set. For comparison purposes, the PNN's input neuron significance is established by using the ReliefF and variable importance procedures that provide the relevance of the input features in the data set. The performance of the reduced PNN is verified against a full structure network in classification problems using real benchmark data sets from an available machine learning repository. The achieved results are also referred to the ones attained by entropy‐based algorithms. The prediction ability expressed in terms of misclassifications is obtained by means of a 10‐fold cross‐validation procedure. Received outcomes point out interesting properties of the proposed algorithm. It is shown that the efficiency determined by all tested reduction methods is comparable.
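The abstract's core idea — rank each input neuron of a trained PNN by how sensitive the network's output is to that input, then drop low-ranked features — can be sketched as follows. This is a toy reconstruction under simplifying assumptions, not the authors' exact algorithm: the PNN is a plain Gaussian Parzen-window classifier, and sensitivity is taken as the mean absolute finite-difference derivative of the between-class score difference; the function names and the synthetic data are invented for illustration.

```python
import numpy as np

def pnn_scores(x, X_train, y_train, sigma=1.0):
    """Summation-layer outputs of a Gaussian Parzen-window PNN, one per class."""
    classes = np.unique(y_train)
    return np.array([
        np.mean(np.exp(-np.sum((X_train[y_train == c] - x) ** 2, axis=1)
                       / (2 * sigma ** 2)))
        for c in classes])

def feature_sensitivity(X, y, sigma=1.0, eps=1e-4):
    """S[j] = mean |d(f0 - f1)/dx_j| over the training patterns (binary case).
    Features with small S[j] barely move the decision and are removal candidates."""
    n, p = X.shape
    S = np.zeros(p)
    for x in X:
        g0 = np.subtract(*pnn_scores(x, X, y, sigma))
        for j in range(p):
            xp = x.copy()
            xp[j] += eps  # one-sided finite-difference perturbation
            S[j] += abs(np.subtract(*pnn_scores(xp, X, y, sigma)) - g0) / eps
    return S / n

rng = np.random.default_rng(0)
# Feature 0 separates the two classes; feature 1 is low-variance noise.
x0 = np.concatenate([rng.normal(-1, 0.3, 30), rng.normal(1, 0.3, 30)])
x1 = rng.normal(0, 0.1, 60)
X = np.column_stack([x0, x1])
y = np.array([0] * 30 + [1] * 30)

S = feature_sensitivity(X, y)
print(S)  # the informative feature should receive the larger score
```

In the paper, the features indicated as insignificant by such a ranking are removed from the data set and the reduced PNN is then compared against the full one; the sketch above only covers the ranking step.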
Published in: | Computational intelligence 2018-08, Vol.34 (3), p.895-916 |
---|---|
Main authors: | Kowalski, Piotr A.; Kusy, Maciej |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 916 |
---|---|
container_issue | 3 |
container_start_page | 895 |
container_title | Computational intelligence |
container_volume | 34 |
creator | Kowalski, Piotr A.; Kusy, Maciej |
description | In classical feedforward neural networks such as multilayer perceptron, radial basis function network, or counter‐propagation network, the neurons in the input layer correspond to features of the training patterns. The number of these features may be large, and their meaningfulness can be various. Therefore, the selection of appropriate input neurons should be regarded. The aim of this paper is to present a complete step‐by‐step algorithm for determining the significance of particular input neurons of the probabilistic neural network (PNN). It is based on the sensitivity analysis procedure applied to a trained PNN. The proposed algorithm is utilized in the task of reduction of the input layer of the considered network, which is achieved by removing appropriately indicated features from the data set. For comparison purposes, the PNN's input neuron significance is established by using the ReliefF and variable importance procedures that provide the relevance of the input features in the data set. The performance of the reduced PNN is verified against a full structure network in classification problems using real benchmark data sets from an available machine learning repository. The achieved results are also referred to the ones attained by entropy‐based algorithms. The prediction ability expressed in terms of misclassifications is obtained by means of a 10‐fold cross‐validation procedure. Received outcomes point out interesting properties of the proposed algorithm. It is shown that the efficiency determined by all tested reduction methods is comparable. |
doi_str_mv | 10.1111/coin.12149 |
format | Article |
publisher | Hoboken: Blackwell Publishing Ltd |
rights | 2017 Wiley Periodicals, Inc. |
orcidid | https://orcid.org/0000-0003-4041-6900 |
fulltext | fulltext |
identifier | ISSN: 0824-7935 |
ispartof | Computational intelligence, 2018-08, Vol.34 (3), p.895-916 |
issn | 0824-7935 (print); 1467-8640 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2084742054 |
source | Wiley-Blackwell Journals; EBSCOhost Business Source Complete |
subjects | Algorithms; Artificial neural networks; Basis functions; feature selection; input neuron significance; Machine learning; Multilayer perceptrons; Neural networks; Neurons; prediction ability; probabilistic neural network; Radial basis function; Reduction; Sensitivity analysis |
title | Determining significance of input neurons for probabilistic neural network by sensitivity analysis procedure |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T18%3A31%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Determining%20significance%20of%20input%20neurons%20for%20probabilistic%20neural%20network%20by%20sensitivity%20analysis%20procedure&rft.jtitle=Computational%20intelligence&rft.au=Kowalski,%20Piotr%20A.&rft.date=2018-08&rft.volume=34&rft.issue=3&rft.spage=895&rft.epage=916&rft.pages=895-916&rft.issn=0824-7935&rft.eissn=1467-8640&rft_id=info:doi/10.1111/coin.12149&rft_dat=%3Cproquest_cross%3E2084742054%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2084742054&rft_id=info:pmid/&rfr_iscdi=true |
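The evaluation protocol described in the abstract — train the PNN on the full feature set and on the reduced one, then compare misclassification rates under 10-fold cross-validation — can be sketched as below. This is my own toy reconstruction, not the authors' code: the PNN is a minimal Gaussian Parzen-window classifier, the data are synthetic, and the "reduced" set here simply keeps the features known to be informative by construction.

```python
import numpy as np

def pnn_predict(x, X_train, y_train, sigma=1.0):
    """Predict the class whose Parzen-window score is largest."""
    classes = np.unique(y_train)
    scores = [np.mean(np.exp(-np.sum((X_train[y_train == c] - x) ** 2, axis=1)
                             / (2 * sigma ** 2)))
              for c in classes]
    return classes[int(np.argmax(scores))]

def cv_error(X, y, n_folds=10, sigma=1.0):
    """Misclassification rate estimated by n-fold cross-validation."""
    idx = np.arange(len(y))
    rng = np.random.default_rng(1)
    rng.shuffle(idx)
    errors = 0
    for fold in np.array_split(idx, n_folds):
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False  # hold the fold out of the training set
        for i in fold:
            if pnn_predict(X[i], X[mask], y[mask], sigma) != y[i]:
                errors += 1
    return errors / len(y)

rng = np.random.default_rng(0)
X_inf = np.vstack([rng.normal(-1, 0.4, (40, 2)),
                   rng.normal(1, 0.4, (40, 2))])   # two informative features
X_noise = rng.normal(0, 1.0, (80, 3))              # three irrelevant features
X_full = np.hstack([X_inf, X_noise])
y = np.array([0] * 40 + [1] * 40)

err_full = cv_error(X_full, y)
err_reduced = cv_error(X_full[:, :2], y)  # keep only the informative pair
print(err_full, err_reduced)
```

On such data the reduced network matches or beats the full one, which mirrors the paper's finding that removing insignificant inputs need not cost prediction accuracy.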