Prototype reduction of the nearest neighbor classifier by weighting training instances


Detailed description

Saved in:
Bibliographic details
Main authors: Parvinnia, E; Jahromi, M Z
Format: Conference proceeding
Language: eng
Subjects:
Online access: Order full text
container_end_page 5
container_issue
container_start_page 1
container_title
container_volume
creator Parvinnia, E; Jahromi, M Z
description The Nearest Neighbor (NN) algorithm is one of the most straightforward instance-based algorithms, and it faces the problem of deciding which instances to store for use during generalization. In its basic form, all training instances are stored and treated equally to find the NN of an input test pattern. In the scheme proposed in this paper, a weight, a real number in the interval [0, ∞), is assigned to each training pattern. The weights of the training patterns are used during generalization to find the NN of a query pattern. To specify the weights, we propose a learning algorithm that minimizes the error rate of the classifier on the training data. At the same time, the algorithm reduces the size of the training set and can be viewed as a powerful instance reduction technique. An instance with zero weight is not used in the generalization phase and can be virtually removed from the training set. The classifier learned in this way might suffer from overfitting, especially in the case of noisy data sets with highly overlapping classes. To tackle this problem, we use a noise-filtering algorithm to remove noisy patterns from the training set before applying the learning scheme. Using 11 data sets from the UCI repository, we show that the proposed method not only improves the generalization accuracy of the basic NN classifier but also provides prototype reduction, and that it is more effective than other algorithms proposed for this purpose in the literature.
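The abstract does not spell out how the weights enter the decision rule, so the following is only a minimal sketch under one common convention (the function name and rule are assumptions, not the paper's method): divide each training instance's distance to the query by its weight, so heavily weighted instances attract the query and zero-weight instances become unreachable, which reproduces the prototype-reduction effect described above.

```python
import numpy as np

def weighted_nn_predict(query, X, y, w):
    """Weight-modulated 1-NN prediction (sketch, assumed rule).

    Each training instance's Euclidean distance to the query is divided
    by its weight in [0, inf); a zero weight makes the scaled distance
    infinite, so that instance is effectively deleted from the set.
    """
    d = np.linalg.norm(X - query, axis=1)   # Euclidean distances to query
    scaled = np.full_like(d, np.inf)        # zero-weight -> unreachable
    mask = w > 0
    scaled[mask] = d[mask] / w[mask]        # larger weight -> smaller distance
    return int(y[np.argmin(scaled)])
```

Under this convention, pruning an instance and driving its weight to zero are the same operation, which is consistent with the abstract's claim that the weight-learning algorithm doubles as an instance reduction technique.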
doi_str_mv 10.1109/IEEEGCC.2009.5734315
format Conference Proceeding
fullrecord title: Prototype reduction of the nearest neighbor classifier by weighting training instances; authors: Parvinnia, E; Jahromi, M Z; in: 2009 5th IEEE GCC Conference & Exhibition, IEEE, 2009-03, p. 1-5; ISBN: 1424438853, 9781424438853; LCCN: 2009900521; DOI: 10.1109/IEEEGCC.2009.5734315
fulltext fulltext_linktorsrc
identifier ISBN: 1424438853
ispartof 2009 5th IEEE GCC Conference & Exhibition, 2009, p.1-5
issn
language eng
recordid cdi_ieee_primary_5734315
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Accuracy
Adaptive Distance Measure
Artificial neural networks
Classification algorithms
Measurement
Nearest Neighbor
Nearest neighbor searches
Prototype Reduction
Prototypes
Training
Weighted Instances
title Prototype reduction of the nearest neighbor classifier by weighting training instances
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T18%3A04%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Prototype%20reduction%20of%20the%20nearest%20neighbor%20classifier%20by%20weighting%20training%20instances&rft.btitle=2009%205th%20IEEE%20GCC%20Conference%20&%20Exhibition&rft.au=Parvinnia,%20E&rft.date=2009-03&rft.spage=1&rft.epage=5&rft.pages=1-5&rft.isbn=1424438853&rft.isbn_list=9781424438853&rft_id=info:doi/10.1109/IEEEGCC.2009.5734315&rft_dat=%3Cieee_6IE%3E5734315%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=5734315&rfr_iscdi=true