Neural ARX Models and PAC Learning

PAC learning theory provides a framework for assessing the learning properties of models, such as the required size of the training sample and the similarity between training and testing performances. These properties, along with stochastic stability, form the main characteristics of typical dynamic ARX modeling using neural networks. In this paper, an extension of PAC learning theory is defined that covers ARX modeling tasks, and the learning properties of a family of neural ARX models are then evaluated under the new theory. The stochastic stability of such networks is also addressed. Finally, using the obtained results, a cost function is proposed that accounts for the learning properties as well as the stochastic stability of a sigmoid neural network, striking a balance between testing and training performance.
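To make the model class concrete: a neural ARX (NARX) predictor maps a regression vector of past outputs and inputs through a sigmoid network to a one-step-ahead prediction. The sketch below is illustrative only; the lag orders, hidden width, and weights are assumptions, not values from the chapter.

```python
import numpy as np

def sigmoid(x):
    # Elementwise logistic activation, as in a sigmoid neural network.
    return 1.0 / (1.0 + np.exp(-x))

def narx_predict(y_past, u_past, W1, b1, W2, b2):
    """One-step prediction y(t) = f([y(t-1..t-na), u(t-1..t-nb)]).

    The regression vector stacks past outputs and past inputs; a single
    hidden sigmoid layer feeds a linear output unit.
    """
    phi = np.concatenate([y_past, u_past])   # ARX regression vector
    h = sigmoid(W1 @ phi + b1)               # hidden sigmoid layer
    return float(W2 @ h + b2)                # linear output

# Illustrative dimensions: lag orders na = nb = 2, hidden width H = 5.
rng = np.random.default_rng(0)
na, nb, H = 2, 2, 5
W1 = rng.normal(size=(H, na + nb))
b1 = rng.normal(size=H)
W2 = rng.normal(size=H)
b2 = 0.0

yhat = narx_predict(np.array([0.5, 0.3]), np.array([1.0, 0.0]), W1, b1, W2, b2)
```

In practice the weights would be fit by minimizing a training cost; the chapter's proposed cost function additionally penalizes terms tied to stochastic stability, which this sketch does not attempt to reproduce.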

Detailed description

Bibliographic details
Author: Hamilton, Howard J
Format: Book Chapter
Language: English
Online access: Full text
Volume: 1822
Pages: 305-315
Creator: Hamilton, Howard J
Description: PAC learning theory provides a framework for assessing the learning properties of models, such as the required size of the training sample and the similarity between training and testing performances. These properties, along with stochastic stability, form the main characteristics of typical dynamic ARX modeling using neural networks. In this paper, an extension of PAC learning theory is defined that covers ARX modeling tasks, and the learning properties of a family of neural ARX models are then evaluated under the new theory. The stochastic stability of such networks is also addressed. Finally, using the obtained results, a cost function is proposed that accounts for the learning properties as well as the stochastic stability of a sigmoid neural network, striking a balance between testing and training performance.
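For context on "the required size of the training sample": the classical PAC sample-complexity bound for a finite hypothesis class $\mathcal{H}$ (a standard result; the chapter's extension to ARX modeling tasks is not reproduced here) states that

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right)
```

examples suffice for a consistent learner to achieve true error at most $\epsilon$ with probability at least $1-\delta$.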
DOI: 10.1007/3-540-45486-1_25
Format: Book Chapter
ISSN: 0302-9743
EISSN: 1611-3349
Published in: Advances in Artificial Intelligence, 2000, Vol. 1822, p. 305-315
Language: English
Record ID: cdi_pascalfrancis_primary_1381182
Source: Springer Books
Subjects:
Applied sciences
Artificial intelligence
Computer science
control theory
systems
Connectionism. Neural networks
Evolutionary Programming
Exact sciences and technology
Learning Theory
Neural Networks
Nonlinear ARX Models
Title: Neural ARX Models and PAC Learning