Cooperative Clustering for Training SVMs

Support vector machines are currently among the most popular approaches to supervised learning. Unfortunately, the computational load of both training and classification increases drastically with the size of the training data set. In this paper, a method called cooperative clustering is proposed. With this procedure, a set of data points of pre-determined size near the border between the two classes is identified and taken as the set of candidate support vectors, and the support vector machine is trained on this small set only. This approach improves training and classification efficiency with little effect on generalization performance, and it can also be used to reduce the number of support vectors in regression problems.

Full Description

Bibliographic Details
Main authors: Tian, Shengfeng, Mu, Shaomin, Yin, Chuanhuan
Format: Conference Proceeding
Language: eng
Subjects:
Online access: Full text
container_start_page 962
container_end_page 967
container_title Advances in Neural Networks - ISNN 2006
creator Tian, Shengfeng
Mu, Shaomin
Yin, Chuanhuan
description Support vector machines are currently among the most popular approaches to supervised learning. Unfortunately, the computational load of both training and classification increases drastically with the size of the training data set. In this paper, a method called cooperative clustering is proposed. With this procedure, a set of data points of pre-determined size near the border between the two classes is identified and taken as the set of candidate support vectors, and the support vector machine is trained on this small set only. This approach improves training and classification efficiency with little effect on generalization performance, and it can also be used to reduce the number of support vectors in regression problems.
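The abstract describes selecting a small set of border points via clustering and training the SVM only on that set. The paper's exact cooperative-clustering algorithm is not reproduced in this record, so the sketch below only approximates the idea with per-class k-means: cluster each class, and from each cluster keep the few points closest to the nearest opposite-class center. The function name `border_subset` and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def border_subset(X, y, n_clusters=10, per_cluster=5, seed=0):
    """Rough stand-in for cooperative clustering: cluster each class
    separately, then from every cluster keep the points closest to
    the nearest centre of the opposite class (i.e. near the border)."""
    centers, members = {}, {}
    for c in (0, 1):
        idx = np.flatnonzero(y == c)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        lab = km.fit_predict(X[idx])
        centers[c] = km.cluster_centers_
        members[c] = (idx, lab)
    keep = []
    for c in (0, 1):
        idx, lab = members[c]
        opposite = centers[1 - c]
        for k in range(n_clusters):
            pts = idx[lab == k]
            if pts.size == 0:
                continue
            # opposite-class centre nearest to this cluster's centre
            target = opposite[np.argmin(
                np.linalg.norm(opposite - centers[c][k], axis=1))]
            # keep the cluster members closest to that opposite centre
            d = np.linalg.norm(X[pts] - target, axis=1)
            keep.extend(pts[np.argsort(d)[:per_cluster]].tolist())
    return np.array(sorted(set(keep)))

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
subset = border_subset(X, y)            # at most 2 * 10 * 5 = 100 points
svm_small = SVC(kernel="rbf").fit(X[subset], y[subset])
print(len(subset), round(svm_small.score(X, y), 3))
```

Training on the reduced set bounds the number of support vectors by the subset size, which is the efficiency gain the abstract claims; the accuracy on the full set indicates how much generalization is sacrificed.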
doi_str_mv 10.1007/11759966_141
format Conference Proceeding
contributor Zurada, Jacek M.
Lu, Bao-Liang
Yi, Zhang
Yin, Hujun
Wang, Jun
identifier ISBN: 354034439X
ISBN: 9783540344391
EISBN: 3540344403
EISBN: 9783540344407
publisher Berlin, Heidelberg: Springer Berlin Heidelberg
rights Springer-Verlag Berlin Heidelberg 2006
fulltext fulltext
identifier ISSN: 0302-9743
ispartof Advances in Neural Networks - ISNN 2006, 2006, p.962-967
issn 0302-9743
1611-3349
language eng
recordid cdi_pascalfrancis_primary_19952712
source Springer Books
subjects Applied sciences
Artificial intelligence
Cluster Center
Computer science; control theory; systems
Data processing. List processing. Character string processing
Exact sciences and technology
Memory organisation. Data processing
Regression Problem
Software
Support Vector
Support Vector Machine
Training Algorithm
title Cooperative Clustering for Training SVMs