Feature Correlation Hypergraph: Exploiting High-order Potentials for Multimodal Recognition
In computer vision and multimedia analysis, it is common to use multiple features (or multimodal features) to represent an object. For example, to characterize a natural scene image well, we typically extract a set of visual features to represent its color, texture, and shape. However, it is challenging to integrate multimodal features optimally...
Saved in:
Published in: | IEEE transactions on cybernetics 2014-08, Vol.44 (8), p.1408-1419 |
---|---|
Main authors: | Zhang, Luming; Gao, Yue; Hong, Chaoqun; Feng, Yinfu; Zhu, Jianke; Cai, Deng |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 1419 |
---|---|
container_issue | 8 |
container_start_page | 1408 |
container_title | IEEE transactions on cybernetics |
container_volume | 44 |
creator | Zhang, Luming; Gao, Yue; Hong, Chaoqun; Feng, Yinfu; Zhu, Jianke; Cai, Deng |
description | In computer vision and multimedia analysis, it is common to use multiple features (or multimodal features) to represent an object. For example, to characterize a natural scene image well, we typically extract a set of visual features to represent its color, texture, and shape. However, it is challenging to integrate multimodal features optimally, since they are usually correlated in a high-order way: the histogram of oriented gradients (HOG), the bag of scale-invariant feature transform (SIFT) descriptors, and wavelets, for example, are closely related because they collaboratively reflect the image texture. Existing algorithms fail to capture this high-order correlation among multimodal features. To solve this problem, we present a new multimodal feature integration framework. In particular, we first define a new measure of the high-order correlation among multimodal features, which can be deemed a direct extension of the usual binary correlation. We then construct a feature correlation hypergraph (FCH) to model the high-order relations among the features, and perform a clustering algorithm on the FCH to group the original multimodal features into a set of partitions. Finally, a multiclass boosting strategy is developed to obtain a strong classifier by combining the weak classifiers learned from each partition. Experimental results on seven popular datasets show the effectiveness of our approach. |
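The first stage of the abstract's pipeline (a high-order correlation measure over feature groups, an FCH whose hyperedges connect strongly correlated features, and clustering on that hypergraph) can be sketched as follows. The record does not give the paper's actual correlation measure or clustering algorithm, so the mean absolute pairwise Pearson correlation over a feature triple and a greedy union of overlapping hyperedges below are hypothetical stand-ins:

```python
import numpy as np
from itertools import combinations

def high_order_correlation(X, idx):
    """Stand-in high-order measure: mean absolute pairwise Pearson
    correlation among the feature columns listed in idx."""
    C = np.corrcoef(X[:, idx], rowvar=False)
    iu = np.triu_indices(len(idx), k=1)
    return float(np.mean(np.abs(C[iu])))

def build_fch(X, order=3, threshold=0.5):
    """Build hyperedges over all feature groups of the given order whose
    high-order correlation exceeds the threshold."""
    hyperedges = []
    for idx in combinations(range(X.shape[1]), order):
        w = high_order_correlation(X, idx)
        if w >= threshold:
            hyperedges.append((idx, w))
    return hyperedges

def partition_features(hyperedges, d):
    """Stand-in for hypergraph clustering: union-find merge of features
    that share a hyperedge, yielding feature partitions."""
    parent = list(range(d))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for idx, _ in hyperedges:
        root = find(idx[0])
        for j in idx[1:]:
            parent[find(j)] = root
    groups = {}
    for i in range(d):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

On synthetic data where three columns share a common latent factor, the triple forms a hyperedge and lands in one partition while unrelated features stay in singletons.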
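The final stage, a weak classifier learned on each feature partition and combined by boosting, might look like the sketch below. It uses plain binary AdaBoost with decision stumps and a round-robin schedule over partitions as illustrative simplifications; the paper itself develops a multiclass boosting strategy, which this record does not detail:

```python
import numpy as np

def train_stump(X, y, w):
    """Weighted-error-minimizing threshold stump over the given columns
    (labels y in {-1, +1}); returns (error, feature, threshold, sign)."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] > t, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best

def predict_stump(stump, X):
    _, j, t, s = stump
    return s * np.where(X[:, j] > t, 1, -1)

def boost_partitions(X, y, partitions, rounds=6):
    """AdaBoost where each round trains a stump restricted to one
    feature partition (cycled round-robin, an assumed schedule)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for r in range(rounds):
        part = partitions[r % len(partitions)]
        stump = train_stump(X[:, part], y, w)
        err = max(stump[0], 1e-10)           # clamp to avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = predict_stump(stump, X[:, part])
        w *= np.exp(-alpha * y * pred)       # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, part, stump))
    return ensemble

def predict(ensemble, X):
    score = sum(a * predict_stump(s, X[:, p]) for a, p, s in ensemble)
    return np.sign(score)
```

When only one partition carries the signal, boosting still recovers a strong classifier because the informative partition's stumps dominate the weighted vote.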
doi_str_mv | 10.1109/TCYB.2013.2285219 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2168-2267; EISSN: 2168-2275; PMID: 24184790; CODEN: ITCEB8 |
ispartof | IEEE transactions on cybernetics, 2014-08, Vol.44 (8), p.1408-1419 |
issn | 2168-2267; 2168-2275 |
language | eng |
recordid | cdi_proquest_journals_1547228987 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Boosting; Classifiers; Correlation; Correlation analysis; Entropy; Feature correlation hypergraph; high-order relations; Joints; Kernel; multimodal features; Partitions; Support vector machines; Surface layer; Texture; Vectors |
title | Feature Correlation Hypergraph: Exploiting High-order Potentials for Multimodal Recognition |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T05%3A53%3A57IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Feature%20Correlation%20Hypergraph:%20Exploiting%20High-order%20Potentials%20for%20Multimodal%20Recognition&rft.jtitle=IEEE%20transactions%20on%20cybernetics&rft.au=Zhang,%20Luming&rft.date=2014-08-01&rft.volume=44&rft.issue=8&rft.spage=1408&rft.epage=1419&rft.pages=1408-1419&rft.issn=2168-2267&rft.eissn=2168-2275&rft.coden=ITCEB8&rft_id=info:doi/10.1109/TCYB.2013.2285219&rft_dat=%3Cproquest_RIE%3E1559689052%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1547228987&rft_id=info:pmid/24184790&rft_ieee_id=6650064&rfr_iscdi=true |