A feature binding model in computer vision for object detection
In this paper, the authors propose the “Feature Binding (FB)” strategy for computer vision, a method grounded in the theory of biological visual perception. Working on feature subspaces, the method follows the biological model and binds features into groups according to certain rules; all features bound in a group are treated as a whole. Each group is assigned a weight coefficient according to its importance, and the weighted groups together determine the object and its location; the object's position is computed from the groups according to the corresponding criteria. Feature Binding significantly improves the accuracy of object detection and localization, accelerates detection, and resists external interference in the unbound feature subspace. It remains accurate not only for whole objects but also for partially occluded ones, and it is robust across feature-based algorithms, both traditional methods and deep learning approaches. In practice, the resulting object-positioning system detects partially occluded objects more accurately.
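The abstract only sketches the mechanism, and the paper's concrete binding rules and weighting criteria are not reproduced in this record. The short Python sketch below is therefore an illustrative reading of the idea, not the authors' implementation: the function names (`bind_features`, `locate_object`), the group assignments, and the importance weights are all hypothetical placeholders. It shows features being bound into groups, each group summarized as a whole, and the object position estimated as an importance-weighted combination of the group estimates.

```python
import numpy as np

def bind_features(feature_positions, group_ids):
    """Bind detected feature points into groups.

    `group_ids` stands in for whatever binding rule assigns features to a
    group; here the assignment is simply given. Each bound group is
    summarized by the mean position of its member features.
    """
    groups = {}
    for pos, gid in zip(feature_positions, group_ids):
        groups.setdefault(gid, []).append(pos)
    return {gid: np.mean(pts, axis=0) for gid, pts in groups.items()}

def locate_object(group_centers, group_weights):
    """Estimate the object position as an importance-weighted average of groups.

    Groups absent from `group_centers` (e.g. occluded) are ignored and the
    remaining importance coefficients are renormalized.
    """
    gids = [g for g in group_centers if g in group_weights]
    centers = np.array([group_centers[g] for g in gids])
    weights = np.array([group_weights[g] for g in gids], dtype=float)
    weights /= weights.sum()      # renormalize importance coefficients
    return weights @ centers      # weighted position estimate

# Toy usage: five feature points bound into two groups of unequal importance.
features = np.array([[10.0, 12.0], [11.0, 13.0], [12.0, 11.5],
                     [40.0, 42.0], [41.0, 41.0]])
group_ids = [0, 0, 0, 1, 1]
centers = bind_features(features, group_ids)
print(locate_object(centers, {0: 0.7, 1: 0.3}))
```

Because the position is a weighted combination over whichever groups are available, dropping the features of an occluded group still yields an estimate, which loosely mirrors the abstract's claim about detecting partially occluded objects.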
Published in: | Multimedia Tools and Applications, 2021-05, Vol. 80 (13), p. 19377-19397 |
Main authors: | Jin, Jing; Zhu, Aichun; Wang, Yuanqing; Wright, James |
Format: | Article |
Language: | English |
Subjects: | Accuracy; Algorithms; Binding; Biological models (mathematics); Computer Communication Networks; Computer Science; Computer vision; Data Structures and Information Theory; Machine learning; Multimedia Information Systems; Object recognition; Position (location); Special Purpose and Application-Based Systems; Visual perception |
Online access: | Full text |
Publisher: | New York: Springer US |
DOI: | 10.1007/s11042-021-10702-9 |
ISSN: | 1380-7501 |
EISSN: | 1573-7721 |