Sparse ensembles using weighted combination methods based on linear programming
An ensemble of multiple classifiers is widely considered to be an effective technique for improving the accuracy and stability of a single classifier. This paper proposes a framework of sparse ensembles and presents new linear weighted combination methods for them. A sparse ensemble combines the outputs of multiple classifiers using a sparse weight vector. When the continuous outputs of multiple classifiers are available, solving for the sparse weight vector can be formulated as a linear programming (LP) problem that exploits the hinge loss and/or the 1-norm regularization, both sparsity-inducing techniques used in machine learning. Only classifiers with nonzero weight coefficients enter the ensemble. These LP-based methods minimize the ensemble training error while controlling the weight vector, which can be regarded as implementing the structural risk minimization principle and naturally explains their good performance. Promising experimental results on UCI data sets and radar high-resolution range profile data are presented.
Saved in:
Published in: | Pattern recognition, 2011, Vol.44 (1), p.97-106 |
Main authors: | Zhang, Li ; Zhou, Wei-Da |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 106 |
container_issue | 1 |
container_start_page | 97 |
container_title | Pattern recognition |
container_volume | 44 |
creator | Zhang, Li ; Zhou, Wei-Da |
description | An ensemble of multiple classifiers is widely considered to be an effective technique for improving the accuracy and stability of a single classifier. This paper proposes a framework of sparse ensembles and presents new linear weighted combination methods for them. A sparse ensemble combines the outputs of multiple classifiers using a sparse weight vector. When the continuous outputs of multiple classifiers are available, solving for the sparse weight vector can be formulated as a linear programming (LP) problem that exploits the hinge loss and/or the 1-norm regularization, both sparsity-inducing techniques used in machine learning. Only classifiers with nonzero weight coefficients enter the ensemble. These LP-based methods minimize the ensemble training error while controlling the weight vector, which can be regarded as implementing the structural risk minimization principle and naturally explains their good performance. Promising experimental results on UCI data sets and radar high-resolution range profile data are presented. |
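The abstract describes casting sparse ensemble weight learning as an LP with hinge loss and 1-norm regularization. The following is a minimal sketch of how such a formulation might look for binary labels in {-1, +1}, not the paper's exact method: the variables, the regularization constant `C`, and the function name are illustrative assumptions. The 1-norm is made linear by the standard split `w = u - v` with `u, v >= 0`, and the hinge loss by slack variables `xi_i >= 1 - y_i * sum_j w_j f_j(x_i)`, `xi_i >= 0`.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_ensemble_weights(F, y, C=1.0):
    """Hypothetical LP sketch: minimize sum(xi) + C*||w||_1 under hinge constraints.

    F: (n, m) continuous outputs of m base classifiers on n samples.
    y: (n,) labels in {-1, +1}. C: illustrative regularization constant.
    """
    n, m = F.shape
    # Variables are [u (m), v (m), xi (n)], all nonnegative; w = u - v.
    c = np.concatenate([C * np.ones(2 * m), np.ones(n)])
    # Hinge constraint y_i * F_i @ (u - v) + xi_i >= 1, rewritten for A_ub x <= b_ub:
    #   -y_i*F_i @ u + y_i*F_i @ v - xi_i <= -1
    Yf = y[:, None] * F
    A_ub = np.hstack([-Yf, Yf, -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:m], res.x[m : 2 * m]
    return u - v  # sparse weight vector; LP vertex solutions zero out many entries
```

Because the 1-norm penalty and the hinge slacks both appear linearly in the objective, an off-the-shelf LP solver suffices; classifiers whose weight comes back (near) zero are simply dropped from the ensemble, which is the sparsity mechanism the abstract refers to.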
doi_str_mv | 10.1016/j.patcog.2010.07.021 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0031-3203 |
ispartof | Pattern recognition, 2011, Vol.44 (1), p.97-106 |
issn | 0031-3203 ; 1873-5142 |
language | eng |
recordid | cdi_proquest_miscellaneous_849482273 |
source | ScienceDirect Journals (5 years ago - present) |
subjects | Applied sciences ; Classifier ensemble ; Classifiers ; Exact sciences and technology ; Hinges ; Information, signal and communications theory ; k nearest neighbor ; Linear programming ; Linear weighted combination ; Mathematical analysis ; Regularization ; Risk ; Signal and communications theory ; Signal representation. Spectral analysis ; Signal, noise ; Sparse ensembles ; Telecommunications and information theory ; Vectors (mathematics) |
title | Sparse ensembles using weighted combination methods based on linear programming |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T08%3A06%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Sparse%20ensembles%20using%20weighted%20combination%20methods%20based%20on%20linear%20programming&rft.jtitle=Pattern%20recognition&rft.au=Zhang,%20Li&rft.date=2011&rft.volume=44&rft.issue=1&rft.spage=97&rft.epage=106&rft.pages=97-106&rft.issn=0031-3203&rft.eissn=1873-5142&rft.coden=PTNRA8&rft_id=info:doi/10.1016/j.patcog.2010.07.021&rft_dat=%3Cproquest_cross%3E849482273%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=849482273&rft_id=info:pmid/&rft_els_id=S0031320310003602&rfr_iscdi=true |