Convergence of batch gradient algorithm with smoothing composition of group l0 and l1/2 regularization for feedforward neural networks

In this paper, we prove the convergence of the batch gradient method for training feedforward neural networks with a new penalty term based on the composition of a smoothing L1/2 penalty on the weight vectors incoming to the hidden nodes and a smoothing group L0 regularization on the resulting vector (BGSGL0L1/2). This penalty drives the weights toward zero at the group level, so that after training some redundant hidden nodes can be removed; moreover, it can also remove some redundant weights of the surviving hidden nodes. The conditions of convergence are given, and the importance of the proposed regularization objective is demonstrated on numerical classification and regression examples.
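
As a reading aid, the display below is a minimal sketch of the kind of composite objective the abstract describes. The notation (batch size B, network output o(w, x_b), targets y_b, penalty weight λ, learning rate η, and the smoothing functions f and σ) is assumed for illustration and is not taken from the paper.

% Hedged sketch of a smoothed group-L0 composed with L1/2 penalty; f and σ are
% generic smooth surrogates, not the smoothing functions actually used in the paper.
\[
  E(\mathbf{w}) \;=\; \frac{1}{2}\sum_{b=1}^{B}\bigl\| y_b - o(\mathbf{w}, x_b) \bigr\|^{2}
  \;+\; \lambda \sum_{j=1}^{N} \sigma\!\Bigl( \sum_{i} f\bigl( w_{ji} \bigr)^{1/2} \Bigr)
\]

Here w_j denotes the weight vector incoming to hidden node j, f is a smooth approximation of the absolute value near zero (so the inner L1/2 term is differentiable), and σ is a smooth surrogate of the L0 indicator with σ(0) = 0 and σ(t) close to 1 away from zero. The batch gradient method updates all weights at once, w ← w − η∇E(w), after a full pass over the training set; after training, hidden nodes whose inner sum has been driven to (near) zero can be pruned, as can individual near-zero weights of the surviving nodes.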

Bibliographic details
Published in: Progress in artificial intelligence, 2022, Vol. 11 (3), pp. 269-278
Authors: Ramchoun, Hassan; Ettaouil, Mohamed
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s13748-022-00285-3
Publisher: Springer Berlin Heidelberg, Berlin/Heidelberg
ISSN: 2192-6352
EISSN: 2192-6360
Source: SpringerNature Journals
Subjects:
Algorithms
Artificial Intelligence
Artificial neural networks
Composition
Computational Intelligence
Computer Imaging
Computer Science
Control
Convergence
Data Mining and Knowledge Discovery
Mechatronics
Natural Language Processing (NLP)
Nodes
Pattern Recognition and Graphics
Regular Paper
Regularization
Robotics
Smoothing
Training
Vision