BGADAM: Boosting based Genetic-Evolutionary ADAM for Neural Network Optimization

Among various optimization methods, gradient descent-based algorithms can achieve outstanding performance and have been widely used in many tasks. Among these commonly used algorithms, ADAM has many advantages, such as fast convergence due to both the momentum term and the adaptive learning rate. However, since the loss functions of most deep neural networks are non-convex, ADAM also shares the drawback of easily getting stuck in local optima. To resolve this problem, the idea of combining a genetic algorithm with base learners has been introduced to rediscover the best solutions. Nonetheless, our analysis shows that combining a genetic algorithm with a batch of base learners still has shortcomings: the effectiveness of the genetic algorithm can hardly be guaranteed if the unit models converge to similar or identical solutions. To resolve this problem and further exploit the advantages of the genetic algorithm with base learners, we propose a boosting strategy for unit model training, which in turn improves the effectiveness of the genetic algorithm. In this paper, we introduce a novel optimization algorithm, namely Boosting based Genetic ADAM (BGADAM). With both theoretical analysis and empirical experiments, we show that adding the boosting strategy to BGADAM helps models escape local optima and converge to better solutions.
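
The abstract describes two ingredients: unit models trained with ADAM under a boosting-style reweighting of the training data (so the units do not collapse to the same solution), and a genetic step (crossover and mutation over model parameters) used to escape local optima. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the toy data, network size, reweighting rule, uniform-crossover scheme, and all hyper-parameters (`n_units`, `mut_std`, learning rate, step counts) are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

torch.manual_seed(0)

# Toy regression data: y = sin(3x) plus noise.
X = torch.linspace(-1, 1, 256).unsqueeze(1)
y = torch.sin(3 * X) + 0.05 * torch.randn_like(X)

def make_unit():
    # A small "unit" network; the architecture is an arbitrary placeholder.
    return nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def adam_train(model, sample_weights, steps=300, lr=1e-2):
    # Train one unit model with ADAM on a per-sample weighted MSE loss.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        per_sample = ((model(X) - y) ** 2).squeeze(1)
        loss = (sample_weights * per_sample).sum() / sample_weights.sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((model(X) - y) ** 2).squeeze(1)   # final per-sample errors

def fitness(model):
    # Higher is better: negative mean squared error on the data.
    with torch.no_grad():
        return -nn.functional.mse_loss(model(X), y).item()

# 1) Boosting-style training: each new unit model sees data reweighted toward
#    the samples the previous unit fit poorly.
n_units = 4
weights = torch.ones(X.shape[0])
population = []
for _ in range(n_units):
    unit = make_unit()
    errors = adam_train(unit, weights)
    population.append(unit)
    weights = weights * (1.0 + errors / (errors.mean() + 1e-8))
    weights = weights * (len(weights) / weights.sum())   # renormalize

# 2) One genetic generation: uniform crossover on the flattened parameter
#    vectors plus Gaussian mutation, then keep the fittest model overall.
def crossover(parent_a, parent_b, mut_std=0.02):
    child = copy.deepcopy(parent_a)
    va = parameters_to_vector(parent_a.parameters()).detach()
    vb = parameters_to_vector(parent_b.parameters()).detach()
    mask = torch.rand_like(va) < 0.5
    child_vec = torch.where(mask, va, vb) + mut_std * torch.randn_like(va)
    vector_to_parameters(child_vec, child.parameters())
    return child

offspring = [crossover(population[i], population[(i + 1) % n_units])
             for i in range(n_units)]
best = max(population + offspring, key=fitness)
print(f"best fitness (negative MSE): {fitness(best):.4f}")
```

A full BGADAM-style loop would alternate such ADAM training phases and genetic generations over several rounds rather than running each once, as the abstract's description suggests.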


Bibliographic Details
Main Authors: Bai, Jiyang; Ren, Yuxiang; Zhang, Jiawei
Format: Article
Language: eng
Subjects: Computer Science - Learning; Computer Science - Neural and Evolutionary Computing
Online Access: order full text
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Bai, Jiyang
Ren, Yuxiang
Zhang, Jiawei
description Among various optimization methods, gradient descent-based algorithms can achieve outstanding performance and have been widely used in many tasks. Among these commonly used algorithms, ADAM has many advantages, such as fast convergence due to both the momentum term and the adaptive learning rate. However, since the loss functions of most deep neural networks are non-convex, ADAM also shares the drawback of easily getting stuck in local optima. To resolve this problem, the idea of combining a genetic algorithm with base learners has been introduced to rediscover the best solutions. Nonetheless, our analysis shows that combining a genetic algorithm with a batch of base learners still has shortcomings: the effectiveness of the genetic algorithm can hardly be guaranteed if the unit models converge to similar or identical solutions. To resolve this problem and further exploit the advantages of the genetic algorithm with base learners, we propose a boosting strategy for unit model training, which in turn improves the effectiveness of the genetic algorithm. In this paper, we introduce a novel optimization algorithm, namely Boosting based Genetic ADAM (BGADAM). With both theoretical analysis and empirical experiments, we show that adding the boosting strategy to BGADAM helps models escape local optima and converge to better solutions.
doi_str_mv 10.48550/arxiv.1908.08015
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1908.08015
ispartof
issn
language eng
recordid cdi_arxiv_primary_1908_08015
source arXiv.org
subjects Computer Science - Learning
Computer Science - Neural and Evolutionary Computing
title BGADAM: Boosting based Genetic-Evolutionary ADAM for Neural Network Optimization
url https://arxiv.org/abs/1908.08015