Multilayered neural architectures evolution for computing sequences of orthogonal polynomials

This article presents an evolutionary algorithm to autonomously construct fully connected multilayered feedforward neural architectures. The algorithm employs grammar-guided genetic programming with a context-free grammar that has been specifically designed to satisfy three important restrictions. First, the sentences belonging to the language produced by the grammar encode all valid neural architectures, and only valid ones. Second, fully connected feedforward neural architectures of any size can be generated. Third, smaller-sized neural architectures are favored to avoid overfitting. The proposed evolutionary neural architecture construction system is applied to compute the terms of the two sequences that define the three-term recurrence relation associated with a sequence of orthogonal polynomials. This application imposes an important constraint: training datasets are always very small. Therefore, an adequately sized neural architecture has to be evolved to achieve satisfactory results, which are presented in terms of the accuracy and size of the evolved neural architectures and the convergence speed of the evolutionary process.
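
For readers unfamiliar with the underlying mathematics: every sequence of monic orthogonal polynomials satisfies a three-term recurrence p_{n+1}(x) = (x - a_n) p_n(x) - b_n p_{n-1}(x), with p_0 = 1 and p_{-1} = 0, and it is the two coefficient sequences {a_n} and {b_n} that the evolved networks are trained to compute. The short Python sketch below only illustrates this standard recurrence; the function name and the choice of Chebyshev coefficients are illustrative assumptions made here and are not taken from the article.

def eval_orthogonal_sequence(x, a, b):
    """Return [p_0(x), ..., p_N(x)] from recurrence coefficients a[0..N-1], b[0..N-1] (b[0] unused)."""
    p_prev, p_curr = 0.0, 1.0              # p_{-1}(x) and p_0(x)
    values = [p_curr]
    for n in range(len(a)):
        # p_{n+1}(x) = (x - a_n) p_n(x) - b_n p_{n-1}(x)
        p_next = (x - a[n]) * p_curr - (b[n] if n > 0 else 0.0) * p_prev
        p_prev, p_curr = p_curr, p_next
        values.append(p_curr)
    return values

# Concrete, checkable instance: monic Chebyshev polynomials of the first kind,
# for which a_n = 0 for all n, b_1 = 1/2, and b_n = 1/4 for n >= 2.
N = 5
a = [0.0] * N
b = [0.0, 0.5] + [0.25] * (N - 2)
print(eval_orthogonal_sequence(0.3, a, b))  # [1.0, 0.3, -0.41, -0.198, ...]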

Bibliographic Details
Published in: Annals of mathematics and artificial intelligence, 2018-12, Vol. 84 (3-4), p. 161-184
Main authors: Barrios Rolanía, Dolores; Delgado Martínez, Guillermo; Manrique, Daniel
Format: Article
Language: English
Subjects: Algorithms; Artificial Intelligence; Complex Systems; Computer Science; Evolutionary algorithms; Genetic algorithms; Grammar; Language; Machine learning; Mathematics; Neural networks; Polynomials; Regularization methods; Sequences
Online access: Full text
DOI: 10.1007/s10472-018-9601-2
ISSN: 1012-2443
EISSN: 1573-7470
Source: SpringerLink Journals - AutoHoldings; ProQuest Central