Multi-output incremental back-propagation


Bibliographic details
Published in: Neural computing & applications, 2023-07, Vol. 35 (20), pp. 14897-14910
Authors: Chaudhari, Rachana; Agarwal, Dhwani; Ravishankar, Kritika; Masand, Nikita; Sambhe, Vijay K.; Udmale, Sandeep S.
Format: Article
Language: English
Publisher: Springer London
DOI: 10.1007/s00521-023-08490-4
ISSN: 0941-0643
EISSN: 1433-3058
Source: SpringerLink Journals - AutoHoldings
Subjects: Artificial Intelligence; Back propagation; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Data Mining and Knowledge Discovery; Deep learning; Image Processing and Computer Vision; Original Article; Principal components analysis; Probability and Statistics in Computer Science
Online access: Full text
Description

Deep learning techniques can form generalized models that solve problems not solvable by traditional approaches, which explains the omnipresence of deep learning models across domains. However, much time is spent finding the optimal hyperparameters that help the model generalize and give the highest accuracy. This paper investigates a proposed model incorporating hybrid layers and a novel approach to weight initialization, aimed at (1) reducing the overall trial-and-error time spent finding the optimal number of layers by providing the necessary insights, and (2) reducing the randomness in weight initialization with the help of a novel incremental back-propagation based model architecture. The model, along with the principal component analysis (PCA) based initialization, provides a substantially more stable weight initialization, thereby improving train and test performance and speeding up convergence to an optimal solution. Furthermore, three data sets were tested on the proposed approach, and it outperformed the state-of-the-art initialization methods.
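
The abstract names principal component analysis as the basis for seeding layer weights, but this record does not include the paper's actual procedure. The following is therefore only a minimal sketch of the general idea, assuming a single dense layer whose weight matrix is seeded column by column with the top principal directions of the training inputs; the helper pca_init_weights and all of its parameters are hypothetical, not taken from the paper.

    import numpy as np

    def pca_init_weights(X, n_units, rng=None):
        # Hypothetical sketch: seed an (n_features x n_units) dense-layer
        # weight matrix with the top principal directions of the input data
        # X (n_samples x n_features), instead of purely random values.
        rng = np.random.default_rng(rng)
        Xc = X - X.mean(axis=0)                        # center the inputs
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        k = min(n_units, Vt.shape[0])                  # directions available
        W = np.empty((X.shape[1], n_units))
        W[:, :k] = Vt[:k].T                            # deterministic PCA columns
        if n_units > k:                                # pad any extra units with
            W[:, k:] = rng.normal(0.0, 0.01,           # small random values
                                  (X.shape[1], n_units - k))
        return W

    # Example: seed a 64-unit layer from 1000 samples of 128-dim inputs.
    X = np.random.default_rng(0).normal(size=(1000, 128))
    W0 = pca_init_weights(X, n_units=64)
    print(W0.shape)  # (128, 64)

Seeding the columns from the data's principal directions rather than from a random distribution makes the starting point stable and reproducible, which matches the stability claim in the abstract. The incremental, multi-output part of the architecture cannot be reconstructed from this record and is deliberately left out of the sketch.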