Consensus and complementarity based maximum entropy discrimination for multi-view classification
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and large margin principles, but it uses only one view of the data rather than all of its views. Although multi-view maximum entropy discrimination considers the multi-view information of the data, it respects only the consensus principle.
Published in: | Information sciences 2016-11, Vol.367-368, p.296-310 |
---|---|
Main authors: | Chao, Guoqing; Sun, Shiliang |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 310 |
---|---|
container_issue | |
container_start_page | 296 |
container_title | Information sciences |
container_volume | 367-368 |
creator | Chao, Guoqing Sun, Shiliang |
description | Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and large margin principles, but it uses only one view of the data rather than all of its views. Although multi-view maximum entropy discrimination considers the multi-view information of the data, it respects only the consensus principle. Consensus and complementarity are the two common principles in multi-view learning. We aim to take full advantage of the multi-view information of the data for classification while respecting both principles simultaneously. In this paper, we propose a new method called consensus and complementarity based MED (MED-2C) for multi-view classification, which utilizes both principles for multi-view learning (MVL). We first transform data from two views into a common subspace and make the transformed data in the new subspace identical, respecting the consensus principle. Then we augment the transformed data with their original features to take the complementarity principle into account. Built on the augmented features, and by relaxing the consensus principle, the objective function of MED-2C is naturally formed, and its inner optimization recovers the traditional MED framework. We provide an instantiation of the MED-2C method and derive the corresponding solution. Experimental results on synthetic and real-world data show the effectiveness of the proposed MED-2C. It not only performs better than three single-view classification methods but also generally outperforms three multi-view classification methods: canonical correlation analysis (CCA), ensemble MED (EMED), and SVM-2K, the multi-view variant of the classical support vector machine (SVM) algorithm. In addition, MED-2C performs better than or as well as the state-of-the-art MVMED on all the data sets. |
doi_str_mv | 10.1016/j.ins.2016.06.004 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0020-0255 |
ispartof | Information sciences, 2016-11, Vol.367-368, p.296-310 |
issn | 0020-0255 (print); 1872-6291 (electronic) |
language | eng |
recordid | cdi_proquest_miscellaneous_1835623703 |
source | Access via ScienceDirect (Elsevier) |
subjects | Classification; Discrimination; Kernel method; Large-margin; Learning; Maximum entropy; Maximum entropy discrimination; Multi-view learning; Optimization; Subspaces; Support vector machines |
title | Consensus and complementarity based maximum entropy discrimination for multi-view classification |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-25T04%3A34%3A40IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Consensus%20and%20complementarity%20based%20maximum%20entropy%20discrimination%20for%20multi-view%20classification&rft.jtitle=Information%20sciences&rft.au=Chao,%20Guoqing&rft.date=2016-11-01&rft.volume=367-368&rft.spage=296&rft.epage=310&rft.pages=296-310&rft.issn=0020-0255&rft.eissn=1872-6291&rft_id=info:doi/10.1016/j.ins.2016.06.004&rft_dat=%3Cproquest_cross%3E1835623703%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1835623703&rft_id=info:pmid/&rft_els_id=S002002551630411X&rfr_iscdi=true |
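The two-step construction described in the abstract (project both views into a shared subspace for consensus, then augment the projected coordinates with the original features for complementarity) can be sketched as follows. This is a hypothetical illustration, not the paper's method: classical CCA stands in for the paper's learned consensus transformation, a ridge-style linear classifier stands in for the MED large-margin solver, and all data and names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic views of the same samples, driven by a shared latent signal.
n = 200
z = rng.normal(size=(n, 2))                       # shared latent factors
y = np.sign(z[:, 0])                              # labels from the first factor
X1 = z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(n, 5))  # view 1
X2 = z @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(n, 6))  # view 2

def cca_projections(X1, X2, k=2, reg=1e-3):
    """Classical CCA via SVD of the whitened cross-covariance
    (a stand-in for the paper's learned consensus transformation)."""
    X1 = X1 - X1.mean(0)
    X2 = X2 - X2.mean(0)
    C11 = X1.T @ X1 / len(X1) + reg * np.eye(X1.shape[1])
    C22 = X2.T @ X2 / len(X2) + reg * np.eye(X2.shape[1])
    C12 = X1.T @ X2 / len(X1)
    W1 = np.linalg.inv(np.linalg.cholesky(C11)).T  # whitener: W1.T @ C11 @ W1 = I
    W2 = np.linalg.inv(np.linalg.cholesky(C22)).T
    U, _, Vt = np.linalg.svd(W1.T @ C12 @ W2)
    return W1 @ U[:, :k], W2 @ Vt.T[:, :k]

P1, P2 = cca_projections(X1, X2)

# Consensus: both views land in a (nearly) shared k-dimensional subspace.
# Complementarity: augment the shared coordinates with the original features.
A1 = np.hstack([X1 @ P1, X1])
A2 = np.hstack([X2 @ P2, X2])

def ridge_fit(A, y, lam=1e-2):
    # Regularized least squares, a stand-in for the MED large-margin solver.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

w1 = ridge_fit(A1, y)
acc = np.mean(np.sign(A1 @ w1) == y)
print(f"training accuracy on augmented view 1: {acc:.2f}")
```

The augmented representations `A1` and `A2` carry both the shared subspace coordinates and the view-specific features, which is the intuition behind MED-2C; the actual method instead folds the subspace learning and the MED classifier into one joint objective.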