Pruning recurrent neural networks for improved generalization performance
Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that rules extracted from networks trained with this pruning heuristic are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state, triple-parity grammar. Further simulations indicate that this pruning method can have generalization performance superior to that obtained by training with weight decay.
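The abstract describes a prune-and-retrain cycle for a fully recurrent network trained on strings of a regular grammar, but does not spell out the pruning criterion here. The minimal sketch below assumes simple magnitude-based pruning of the recurrent weight matrix, uses a single-parity grammar as a stand-in for the paper's 10-state and triple-parity grammars, and is written in PyTorch; every name, hyperparameter, and the choice of framework is an illustrative assumption rather than the authors' method.

```python
# Hedged sketch of a prune-and-retrain cycle for a small recurrent classifier.
# Assumptions (not from the paper): magnitude-based pruning of the recurrent
# weights, a parity grammar as training data, and the hyperparameters below.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_parity_batch(n_strings=64, max_len=10):
    """Random bit strings labeled by the parity of the number of 1s
    (a 2-state regular grammar standing in for the paper's grammars)."""
    x = torch.randint(0, 2, (n_strings, max_len, 1)).float()
    y = (x.sum(dim=(1, 2)) % 2).long()           # 1 = odd number of ones
    return x, y

class TinyRecurrentClassifier(nn.Module):
    def __init__(self, hidden=8):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, x):
        _, h = self.rnn(x)                        # final hidden state
        return self.out(h.squeeze(0))

def train(model, steps=300, mask=None, lr=0.05):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x, y = make_parity_batch()
        loss = loss_fn(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if mask is not None:                      # keep pruned weights at zero
            with torch.no_grad():
                model.rnn.weight_hh_l0 *= mask
    return loss.item()

def accuracy(model, trials=2000):
    x, y = make_parity_batch(n_strings=trials)
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = TinyRecurrentClassifier()
train(model)
print("generalization before pruning:", accuracy(model))

# Prune: zero the ~25% smallest-magnitude recurrent weights, then retrain with
# the mask held fixed -- the prune-and-retrain cycle the abstract refers to.
with torch.no_grad():
    w = model.rnn.weight_hh_l0
    threshold = w.abs().flatten().kthvalue(int(0.25 * w.numel())).values
    mask = (w.abs() > threshold).float()
    w *= mask

train(model, mask=mask)
print("generalization after pruning and retraining:", accuracy(model))
```

Holding the mask fixed during retraining keeps the removed weights at zero, so any recovery in accuracy comes from the surviving weights; this mirrors the general idea of pruning followed by retraining, not the paper's exact procedure.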
Saved in:
Published in: | IEEE transactions on neural networks 1994-09, Vol.5 (5), p.848-851 |
---|---|
Main authors: | Giles, C.L.; Omlin, C.W. |
Format: | Article |
Language: | eng |
Subjects: | Clustering algorithms; Doped fiber amplifiers; Learning automata; National electric code; Neural networks; Neurons; Quantization; Recurrent neural networks; Space exploration; State-space methods |
Online access: | Order full text |
container_end_page | 851 |
---|---|
container_issue | 5 |
container_start_page | 848 |
container_title | IEEE transactions on neural networks |
container_volume | 5 |
creator | Giles, C.L. Omlin, C.W. |
description | Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that rules extracted from networks trained with this pruning heuristic are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state, triple-parity grammar. Further simulations indicate that this pruning method can have generalization performance superior to that obtained by training with weight decay. |
doi_str_mv | 10.1109/72.317740 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1045-9227 |
ispartof | IEEE transactions on neural networks, 1994-09, Vol.5 (5), p.848-851 |
issn | 1045-9227; 1941-0093 |
language | eng |
recordid | cdi_ieee_primary_317740 |
source | IEEE Electronic Library (IEL) |
subjects | Clustering algorithms; Doped fiber amplifiers; Learning automata; National electric code; Neural networks; Neurons; Quantization; Recurrent neural networks; Space exploration; State-space methods |
title | Pruning recurrent neural networks for improved generalization performance |