A deep neural network and classical features based scheme for objects recognition: an application for machine inspection
Computer vision (CV) is widely used in the current era of automation and visual surveillance for the detection and classification of objects in diverse environments. Automatic machine inspection of objects in a scene relies on internal and external parameters, such as features, that carry a large amount of information about the nature of an object. This work proposes an automated object classification method that serially fuses classical PHOG and CS-LBP descriptors with deep Inception V3 features and selects the most discriminative of them with a joint-entropy and KNN based criterion (JEKNN) before supervised classification.
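The abstract describes a serial fusion of handcrafted PHOG and CS-LBP descriptors with deep Inception V3 features. The sketch below shows one plausible way to implement that fusion step, not the authors' exact pipeline: `extract_phog` and `extract_cslbp` are hypothetical placeholders for the handcrafted descriptors, while the deep features come from the standard Keras Inception V3 application used as a fixed extractor.

```python
# Sketch: serial fusion of handcrafted and deep features, assuming images are
# already resized to 299x299x3 RGB for Inception V3. extract_phog and
# extract_cslbp are hypothetical stand-ins for the PHOG and CS-LBP descriptors.
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

# Pre-trained Inception V3 used as a fixed feature extractor (2048-D per image).
deep_model = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def deep_features(image_batch: np.ndarray) -> np.ndarray:
    """Average-pooled Inception V3 activations for a batch of RGB images."""
    x = preprocess_input(image_batch.astype("float32"))
    return deep_model.predict(x, verbose=0)

def fuse_features(image_batch, extract_phog, extract_cslbp):
    """Serially concatenate PHOG, CS-LBP and deep features for each image."""
    handcrafted = np.stack([
        np.concatenate([extract_phog(img), extract_cslbp(img)])
        for img in image_batch
    ])
    return np.concatenate([handcrafted, deep_features(image_batch)], axis=1)
```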
Published in: | Multimedia tools and applications, 2024-02, Vol. 83 (5), p. 14935-14957 |
---|---|
Main authors: | Hussain, Nazar; Khan, Muhammad Attique; Sharif, Muhammad; Khan, Sajid Ali; Albesher, Abdulaziz A.; Saba, Tanzila; Armaghan, Ammar |
Format: | Article |
Language: | eng |
Subjects: | Artificial neural networks; Automation; Classification; Computer Communication Networks; Computer Science; Computer vision; Data augmentation; Data Structures and Information Theory; Deep learning; Inspection; Machine learning; Multimedia Information Systems; Object recognition; Special Purpose and Application-Based Systems; Supervised learning |
Online access: | Full text |
container_end_page | 14957 |
---|---|
container_issue | 5 |
container_start_page | 14935 |
container_title | Multimedia tools and applications |
container_volume | 83 |
creator | Hussain, Nazar; Khan, Muhammad Attique; Sharif, Muhammad; Khan, Sajid Ali; Albesher, Abdulaziz A.; Saba, Tanzila; Armaghan, Ammar |
description | Computer vision (CV) is widely used in the current era of automation and visual surveillance for the detection and classification of objects in diverse environments. Automatic machine inspection of objects in a scene relies on internal and external parameters, such as features, that carry a large amount of information about the nature of an object. In this work, we propose a new automated method based on classical and deep learning feature selection. The proposed object classification method follows three steps. In the first step, data augmentation is performed to balance the database. In the second step, Pyramid HOG (PHOG) and Central Symmetric LBP (CS-LBP) features are serially fused with deep features extracted from the pre-trained CNN model Inception V3. In the third step, a new technique named Joint Entropy along with KNN (JEKNN) is employed to select the best features. The selected features are finally classified with well-known supervised learning methods, and the best classifier is chosen on the basis of accuracy. The proposed method is evaluated on a balanced Caltech101 dataset and achieves a maximum accuracy of 90.4% with the Ensemble classifier, outperforming existing techniques. |
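A minimal sketch of the third step is given below, under one simple reading of "Joint Entropy along with KNN": features are scored by an entropy estimate, the highest-scoring subset is kept, and the subset is validated with a KNN classifier via cross-validation. The exact JEKNN formulation in the paper may differ; the function names and the `keep_fraction` parameter are illustrative assumptions.

```python
# Sketch: entropy-ranked feature selection validated with a KNN classifier,
# as one reading of the "Joint Entropy along with KNN (JEKNN)" step. All
# names and the keep_fraction parameter are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def feature_entropy(column: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy of one feature, estimated from a histogram."""
    counts, _ = np.histogram(column, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_features(X, y, keep_fraction=0.5, n_neighbors=5):
    """Keep the highest-entropy features and report 5-fold KNN accuracy."""
    scores = np.array([feature_entropy(X[:, j]) for j in range(X.shape[1])])
    n_keep = max(1, int(keep_fraction * X.shape[1]))
    selected = np.argsort(scores)[::-1][:n_keep]
    knn = KNeighborsClassifier(n_neighbors=n_neighbors)
    accuracy = cross_val_score(knn, X[:, selected], y, cv=5).mean()
    return selected, accuracy
```

In the setup described above, X would be the fused PHOG/CS-LBP/Inception V3 feature matrix and y the Caltech101 class labels; the selected subset would then be passed to the supervised classifiers compared in the paper.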
doi_str_mv | 10.1007/s11042-020-08852-3 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1573-7721 |
ispartof | Multimedia tools and applications, 2024-02, Vol.83 (5), p.14935-14957 |
issn | 1380-7501; 1573-7721 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2918768050 |
source | SpringerLink Journals - AutoHoldings |
subjects | Artificial neural networks; Automation; Classification; Computer Communication Networks; Computer Science; Computer vision; Data augmentation; Data Structures and Information Theory; Deep learning; Inspection; Machine learning; Multimedia Information Systems; Object recognition; Special Purpose and Application-Based Systems; Supervised learning |
title | A deep neural network and classical features based scheme for objects recognition: an application for machine inspection |