Discretization methods for encoding of continuous input variables for Boolean neural networks
RAM-based neural networks are normally based on binary input variables, and a thermometer code or a so-called CMAC-Gray code is most often used when encoding real-valued variables. The number of intervals and the interval boundaries are normally set from ad hoc principles. Using this approach, many intervals are normally needed to provide sufficient resolution. This leads to large variable codes, which in turn complicates the learning problem. Instead of selecting more or less arbitrary interval boundaries, it can be expected to be beneficial to use discretization techniques, where the split values are selected using information measures. We report on the results that can be obtained by applying local and global discretization techniques together with enhanced schemes of the so-called n-tuple classifier, which is the simplest type of RAM neural net. The enhanced n-tuple nets have proven competitive on a large set of benchmark data sets. By making proper use of the discretization boundaries, increased performance can be obtained. The local discretization algorithms are closely connected with the learning principle used for decision trees, and we show how such schemes can be used as variable selectors for the RAM-based neural nets.
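To make the encoding concrete, here is a minimal sketch of a thermometer code for a single real-valued variable, assuming fixed, ad hoc interval boundaries of the kind the abstract criticizes; the function name and the boundary values are illustrative, not taken from the paper.

```python
import numpy as np

def thermometer_encode(x, boundaries):
    """One bit per boundary: a bit is set for every boundary the value
    meets or exceeds, so codes of neighbouring intervals differ in
    exactly one bit. More intervals -> longer codes."""
    return np.array([int(x >= b) for b in boundaries], dtype=np.uint8)

# Four ad hoc, equally spaced boundaries on [0, 1]
print(thermometer_encode(0.55, [0.2, 0.4, 0.6, 0.8]))  # -> [1 1 0 0]
```

The code length equals the number of boundaries, which is why ad hoc fine-grained discretization inflates the input representation.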
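The abstract's "split values selected using information measures" is in the spirit of entropy-based discretization as used in decision-tree induction. Below is a minimal sketch of choosing a single cut point by information gain; the helper names are mine, and the paper's actual local and global algorithms are more elaborate than this.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label sample, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(values, labels):
    """Pick the cut point with maximal information gain, as in
    decision-tree induction. Candidate cuts are midpoints between
    consecutive distinct sorted values."""
    order = np.argsort(values)
    v, y = np.asarray(values)[order], np.asarray(labels)[order]
    base = entropy(y)
    best_gain, best_cut = -1.0, None
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        cut = (v[i] + v[i - 1]) / 2
        left, right = y[:i], y[i:]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(y)
        if gain > best_gain:
            best_gain, best_cut = gain, cut
    return best_cut, best_gain

vals = [0.1, 0.3, 0.35, 0.7, 0.9]
labs = [0, 0, 0, 1, 1]
print(best_split(vals, labs))  # -> (0.525, ~0.97): a clean class split
```

Applied locally (within a region of input space) this is exactly the splitting step of decision-tree learning, which is the connection the abstract draws.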
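For context, here is a bare-bones n-tuple (RAM) classifier over binary input vectors, the baseline that the paper's enhanced schemes build on. This is the generic textbook formulation, not the authors' enhanced version; the class and parameter names are illustrative.

```python
import numpy as np

class NTupleClassifier:
    """Minimal n-tuple (RAM) classifier sketch: each of n_tuples random
    n-bit subsets of the input positions addresses one boolean lookup
    table (RAM) per class."""

    def __init__(self, n_bits, n, n_tuples, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.tuples = [rng.choice(n_bits, size=n, replace=False)
                       for _ in range(n_tuples)]
        self.rams = np.zeros((n_classes, n_tuples, 2 ** n), dtype=bool)

    def _addresses(self, x):
        # Each tuple's selected bits form a binary RAM address.
        return [int("".join(str(x[i]) for i in t), 2) for t in self.tuples]

    def fit(self, X, y):
        # Training just sets the addressed cell in the true class's RAMs.
        for x, c in zip(X, y):
            for j, a in enumerate(self._addresses(x)):
                self.rams[c, j, a] = True

    def predict(self, x):
        # Score per class: number of tuples that have seen this sub-address.
        addrs = self._addresses(x)
        scores = [sum(self.rams[c, j, a] for j, a in enumerate(addrs))
                  for c in range(self.rams.shape[0])]
        return int(np.argmax(scores))

# Tiny demo on 8-bit patterns, two classes (illustrative data)
X = np.array([[1, 1, 1, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 1, 1, 1]])
clf = NTupleClassifier(n_bits=8, n=3, n_tuples=4, n_classes=2)
clf.fit(X, [0, 1])
print(clf.predict(np.array([1, 1, 1, 1, 0, 0, 0, 0])))  # -> 0
```

The discretization boundaries discussed in the paper determine the binary vectors fed to such a net, which is why their placement directly affects its performance.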
Saved in:

Main authors: | Linneberg, C.; Jorgensen, T.M. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Binary codes; Classification tree analysis; Decision trees; Encoding; Frequency; Input variables; Laboratories; Neural networks; Ribs; Sampling methods |
Online access: | Order full text |
container_end_page | 1224 vol.2 |
---|---|
container_issue | |
container_start_page | 1219 |
container_title | IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339) |
container_volume | 2 |
creator | Linneberg, C.; Jorgensen, T.M. |
doi_str_mv | 10.1109/IJCNN.1999.831134 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1098-7576; EISSN: 1558-3902; ISBN: 0780355296; ISBN: 9780780355293 |
ispartof | IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339), 1999, Vol.2, p.1219-1224 vol.2 |
issn | 1098-7576; 1558-3902 |
language | eng |
recordid | cdi_ieee_primary_831134 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Binary codes; Classification tree analysis; Decision trees; Encoding; Frequency; Input variables; Laboratories; Neural networks; Ribs; Sampling methods |
title | Discretization methods for encoding of continuous input variables for Boolean neural networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T16%3A57%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Discretization%20methods%20for%20encoding%20of%20continuous%20input%20variables%20for%20Boolean%20neural%20networks&rft.btitle=IJCNN'99.%20International%20Joint%20Conference%20on%20Neural%20Networks.%20Proceedings%20(Cat.%20No.99CH36339)&rft.au=Linneberg,%20C.&rft.date=1999&rft.volume=2&rft.spage=1219&rft.epage=1224%20vol.2&rft.pages=1219-1224%20vol.2&rft.issn=1098-7576&rft.eissn=1558-3902&rft.isbn=0780355296&rft.isbn_list=9780780355293&rft_id=info:doi/10.1109/IJCNN.1999.831134&rft_dat=%3Cieee_6IE%3E831134%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=831134&rfr_iscdi=true |