Multiview Objects Recognition Using Deep Learning-Based Wrap-CNN with Voting Scheme
Industrial automation effectively reduces human effort across many industrial activities. In many autonomous systems, object recognition plays a vital role, so researchers are motivated to find a solution for accurate object recognition in autonomous systems. ...
Saved in:
Published in: | Neural processing letters 2022-06, Vol.54 (3), p.1495-1521 |
---|---|
Main authors: | Balamurugan, D.; Aravinth, S. S.; Reddy, P. Chandra Shaker; Rupani, Ajay; Manikandan, A. |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 1521 |
---|---|
container_issue | 3 |
container_start_page | 1495 |
container_title | Neural processing letters |
container_volume | 54 |
creator | Balamurugan, D.; Aravinth, S. S.; Reddy, P. Chandra Shaker; Rupani, Ajay; Manikandan, A. |
description | Industrial automation effectively reduces human effort across many industrial activities. In many autonomous systems, object recognition plays a vital role, so researchers are motivated to find a solution for accurate object recognition in autonomous systems. To this end, various techniques have been designed with the support of classifiers and machine learning, but they fall short on multiview object recognition: a single classifier or machine learning algorithm is not enough to recognize multiview objects accurately. In this paper, a Wrap Convolutional Neural Network (Wrap-CNN) with a voting scheme is proposed to solve the multiview object recognition problem and attain better recognition accuracy. The proposed model consists of three phases: pre-processing, pre-trained CNNs, and a voting scheme. The pre-processing phase removes unwanted noise. The pre-trained CNN models are then used as feature extractors and classify the images into their respective classes. In the Wrap-CNN, nine pre-trained CNNs are used in parallel: AlexNet, VGGNet, GoogLeNet, Inception v3, SqueezeNet, ResNet v2, Xception, MobileNetV2, and ShuffleNet. Finally, the output class is chosen from the nine predicted classes by the voting scheme. The system was tested in two scenarios: images without rotation and images with rotation. The overall accuracy is 99% for recognition without rotation and 93% with rotation. Ultimately, the system proves effective for multiview object recognition and can be used in industrial automation systems. |
doi_str_mv | 10.1007/s11063-021-10679-4 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1370-4621 |
ispartof | Neural processing letters, 2022-06, Vol.54 (3), p.1495-1521 |
issn | 1370-4621 1573-773X |
language | eng |
recordid | cdi_proquest_journals_2918348163 |
source | SpringerLink Journals; ProQuest Central UK/Ireland; ProQuest Central |
subjects | Accuracy; Algorithms; Artificial Intelligence; Artificial neural networks; Automation; Classification; Classifiers; Complex Systems; Computational Intelligence; Computer Science; Datasets; Deep learning; Feature extraction; Image classification; Image retrieval; Machine learning; Model accuracy; Object recognition; Performance evaluation; Rotation; Semantics |
title | Multiview Objects Recognition Using Deep Learning-Based Wrap-CNN with Voting Scheme |
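The abstract above describes the Wrap-CNN as nine ImageNet-pretrained CNNs classifying the same image in parallel, with the final label chosen by a voting scheme. The sketch below only illustrates that voting idea and is not the authors' code: it assumes a plurality vote over frozen, off-the-shelf weights, standard ImageNet preprocessing, a recent torchvision (0.13+ for the `weights` argument), and a subset of the named backbones that torchvision provides; the paper's noise-removal pre-processing and fine-tuning steps are omitted.

```python
# Illustrative sketch of a voting ensemble of pre-trained CNNs.
# Assumptions: plurality vote, ImageNet class indices, torchvision >= 0.13.
from collections import Counter

import torch
from PIL import Image
from torchvision import models, transforms

# Subset of the backbones named in the abstract that ship with torchvision.
backbones = {
    "alexnet": models.alexnet(weights="DEFAULT"),
    "vgg16": models.vgg16(weights="DEFAULT"),
    "googlenet": models.googlenet(weights="DEFAULT"),
    "squeezenet": models.squeezenet1_1(weights="DEFAULT"),
    "mobilenet_v2": models.mobilenet_v2(weights="DEFAULT"),
}
for net in backbones.values():
    net.eval()  # inference mode for all member networks

# Standard ImageNet preprocessing; the paper's noise removal is not modelled.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_by_vote(image_path: str) -> int:
    """Return the class index chosen by plurality vote across the backbones."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        # Each network votes with its top-1 predicted class for the same image.
        votes = [int(net(x).argmax(dim=1)) for net in backbones.values()]
    # Most common class wins; ties fall back to the first-encountered class.
    return Counter(votes).most_common(1)[0][0]
```

In practice each backbone would be fine-tuned on the target classes before voting, as the paper does; this frozen-weight version is only meant to show how the parallel predictions and the vote fit together.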