Different Testing Results on SVM with Double Penalty Parameters

The Support Vector Machine (SVM) proposed by Vapnik is a generalized linear classifier that performs binary classification based on supervised learning. SVM has developed rapidly and has spawned a series of improved and extended algorithms, which have been applied to pattern recognition, image recognition, and related fields. Among these improvements, the technique of setting the ratio of the two penalty parameters according to the ratio of the sample sizes of the two classes has been widely adopted. However, this technique has never been verified by rigorous mathematical proof. The experiments in this study, based on the USPS data sets, were designed to test the accuracy of the theory. The optimal parameters for the USPS sets were found by a grid-scanning method, which showed that the theory does not hold in general: no linear relationship was found between the ratio of the penalty parameters and the ratio of the sample sizes.
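For reference, the dual-penalty soft-margin SVM that this heuristic applies to can be written as below. This is the standard textbook formulation, not an equation reproduced from the article; n_+ and n_- denote the sample counts of the positive and negative classes.

```latex
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}}\;
  \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}
  + C_{+}\sum_{i:\,y_i=+1}\xi_i
  + C_{-}\sum_{i:\,y_i=-1}\xi_i
\quad\text{s.t.}\quad
  y_i\bigl(\mathbf{w}^{\top}\phi(\mathbf{x}_i)+b\bigr)\ge 1-\xi_i,\qquad \xi_i\ge 0.
```

The heuristic under test sets C_+ / C_- = n_- / n_+, i.e., the minority class receives the larger penalty.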

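The sketch below illustrates, under stated assumptions, the kind of comparison the abstract describes. It is not the authors' experiment code: scikit-learn's class_weight="balanced" rescales the per-class penalty inversely to the class sample count (C_+ / C_- = n_- / n_+), an imbalanced toy data set stands in for the USPS digits, and a small grid over C and gamma plays the role of the grid-scanning method.

```python
# Minimal sketch: compare the sample-ratio penalty heuristic against equal
# penalties via a grid scan. Data set and parameter grid are illustrative
# assumptions, not values from the article.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Imbalanced two-class toy data (90% / 10%) standing in for a USPS digit pair.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# class_weight="balanced" applies the heuristic C_+ / C_- = n_- / n_+;
# class_weight=None keeps equal penalties. Both are scanned over the same grid.
param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", 0.01, 0.1],
    "class_weight": [None, "balanced"],
}
grid = GridSearchCV(SVC(kernel="rbf"), param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)

print("best parameters:", grid.best_params_)
print("cross-validated accuracy:", grid.best_score_)
print("held-out accuracy:", grid.score(X_test, y_test))
```

If the widely adopted rule were exact, the balanced setting would always be selected; the article instead reports that on the USPS sets the best penalty ratio found by grid scanning does not follow the sample-size ratio.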
Bibliographic Details
Published in: Mathematical Problems in Engineering, 2021-12, Vol. 2021, pp. 1-5
Main authors: Yao, Chengkuan; Cao, Liyong; Xu, Jianhua; Yang, Mingya
Format: Article
Language: English
Subjects: Accuracy; Algorithms; Artificial intelligence; Classification; Datasets; Experiments; Fines & penalties; Machine learning; Object recognition; Parameters; Pattern recognition; Support vector machines
DOI: 10.1155/2021/4031626
ISSN: 1024-123X
EISSN: 1563-5147
Online Access: Full text