Locally adaptive k parameter selection for nearest neighbor classifier: one nearest cluster

The k nearest neighbors (k-NN) classification technique is widely known for its simplicity, effectiveness, and robustness. As a lazy learner, k-NN is a versatile algorithm used in many fields. In this classifier, the k parameter is generally chosen by the user, and the optimal k value is found by experiment. The chosen constant k is then used throughout the whole classification phase. Using the same k value for every test sample can decrease overall prediction performance; the optimal k should instead vary from one test sample to another to yield more accurate predictions. In this study, a method that dynamically selects a k value for each instance is proposed. This improved classification method employs a simple clustering procedure. In the experiments, more accurate results were obtained, and the reasons for this success are analyzed and presented.
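The abstract gives no implementation details, so the following is only a hypothetical sketch of the general idea it describes, choosing k per query from a simple clustering of the training data, and not the authors' exact "one nearest cluster" algorithm. All names, the naive k-means routine, and the choice of setting k to the nearest cluster's size are illustrative assumptions.

```python
# Hypothetical sketch: per-instance k selection for k-NN via clustering.
# NOT the paper's exact method; it only illustrates deriving a query-specific
# k from the cluster structure instead of using one global constant k.
import math
from collections import Counter

def dist(a, b):
    return math.dist(a, b)  # Euclidean distance (Python 3.8+)

def kmeans(points, n_clusters, iters=20):
    centers = list(points[:n_clusters])  # naive init: first n_clusters points
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
            groups[nearest].append(p)
        centers = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

def adaptive_knn_predict(train_x, train_y, query, n_clusters=2):
    centers, groups = kmeans(train_x, n_clusters)
    nearest = min(range(len(centers)), key=lambda i: dist(query, centers[i]))
    k = max(1, len(groups[nearest]))  # per-query k: size of the nearest cluster
    neighbors = sorted(zip(train_x, train_y), key=lambda t: dist(query, t[0]))[:k]
    return Counter(y for _, y in neighbors).most_common(1)[0][0]

train_x = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
train_y = ["a", "a", "a", "b", "b"]
print(adaptive_knn_predict(train_x, train_y, (0.05, 0.1)))  # prints "a"
```

A query near the tight three-point cluster votes with k = 3, while one near the two-point cluster votes with k = 2, so k adapts to the local density rather than staying fixed.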

Full description

Bibliographic details
Published in: Pattern analysis and applications : PAA, 2017-05, Vol.20 (2), p.415-425
Main authors: Bulut, Faruk; Amasyali, Mehmet Fatih
Format: Article
Language: English
Publisher: Springer London
Online access: Full text
DOI: 10.1007/s10044-015-0504-0
ISSN: 1433-7541
EISSN: 1433-755X
Source: SpringerLink Journals - AutoHoldings
Subjects: Classification; Classifiers; Clustering; Components industry; Computer Science; Pattern Recognition; Silicon; Theoretical Advances