Learning an Optimal Naive Bayes Classifier


Bibliographic Details
Main Authors: Martinez-Arroyo, M., Sucar, L.E.
Format: Conference Proceedings
Language: English
Description

Abstract: The naive Bayes classifier is an efficient classification model that is easy to learn and achieves high accuracy in many domains. However, it has two main drawbacks: (i) its classification accuracy decreases when the attributes are not independent, and (ii) it cannot deal with nonparametric continuous attributes. In this work we propose a method that deals with both problems and learns an optimal naive Bayes classifier. The method includes two phases, discretization and structural improvement, which are repeated alternately until the classification accuracy can no longer be improved. Discretization is based on the minimum description length principle. To deal with dependent and irrelevant attributes, we apply a structural improvement method that eliminates and/or joins attributes, based on mutual and conditional information measures. The method has been tested in two different domains with good results.
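The learn-and-improve loop the abstract describes can be sketched in Python. This is a simplified illustration, not the authors' implementation: equal-width binning stands in for the paper's MDL-based discretization, and structural improvement is reduced to eliminating attributes with low mutual information with the class (attribute joining and conditional-information tests are omitted). All function names, the bin count, and the MI threshold are illustrative assumptions.

```python
import numpy as np

def discretize(X, bins=4):
    # Equal-width binning: a simple stand-in for the paper's
    # MDL-based discretization (assumption for illustration).
    out = np.empty(X.shape, dtype=int)
    for j in range(X.shape[1]):
        edges = np.linspace(X[:, j].min(), X[:, j].max(), bins + 1)[1:-1]
        out[:, j] = np.digitize(X[:, j], edges)
    return out

def mutual_information(x, y):
    # I(X;Y) in nats, estimated from empirical joint counts.
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

class DiscreteNB:
    # Naive Bayes over discrete attributes with Laplace smoothing.
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.priors = np.array([(y == c).mean() for c in self.classes])
        self.tables = []
        for j in range(X.shape[1]):
            vals = np.unique(X[:, j])
            t = np.ones((len(self.classes), len(vals)))  # Laplace prior
            for ci, c in enumerate(self.classes):
                for vi, v in enumerate(vals):
                    t[ci, vi] += np.sum((y == c) & (X[:, j] == v))
            t /= t.sum(axis=1, keepdims=True)
            self.tables.append((vals, t))
        return self

    def predict(self, X):
        logp = np.tile(np.log(self.priors), (X.shape[0], 1))
        for j, (vals, t) in enumerate(self.tables):
            idx = np.clip(np.searchsorted(vals, X[:, j]), 0, len(vals) - 1)
            logp += np.log(t[:, idx]).T
        return self.classes[np.argmax(logp, axis=1)]

# Synthetic demo: feature 0 is informative, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400)
X = np.column_stack([y + rng.normal(0, 0.3, 400),   # informative
                     rng.normal(0, 1, 400)])        # irrelevant
Xd = discretize(X, bins=4)

# Structural improvement (simplified): drop low-MI attributes.
keep = [j for j in range(Xd.shape[1])
        if mutual_information(Xd[:, j], y) > 0.05]
clf = DiscreteNB().fit(Xd[:, keep], y)
acc = (clf.predict(Xd[:, keep]) == y).mean()
```

In the paper, this filtering step alternates with re-discretization until accuracy stops improving; here a single pass is shown, and the noise attribute is eliminated while the informative one is retained.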
ISSN: 1051-4651, 2831-7475
DOI: 10.1109/ICPR.2006.748