Using POMDPs for learning cost sensitive decision trees

Bibliographic Details
Published in: Artificial Intelligence, 2021-03, Vol. 292, p. 103400, Article 103400
Main Authors: Maliah, Shlomi; Shani, Guy
Format: Article
Language: English
Online Access: Full text
Description
Summary: In classification, an algorithm learns to classify a given instance based on a set of observed attribute values. In many real-world cases, testing the value of an attribute incurs a cost. Furthermore, there can also be a cost associated with the misclassification of an instance. Cost-sensitive classification attempts to minimize the expected cost of classification by deciding, after each observed attribute value, which attribute to measure next. In this paper we suggest Partially Observable Markov Decision Processes (POMDPs) as a modeling tool for cost-sensitive classification. POMDPs are typically solved through a policy over belief states. We show how a relatively small set of potentially important belief states can be identified, and define an MDP over these belief states. To identify these potentially important belief states, we construct standard decision trees over all attribute subsets, and the leaves of these trees become the state space of our tree-based MDP. At each phase we decide on the next attribute to measure, balancing the cost of the measurement against the classification accuracy. We compare our approach to a set of previous approaches, showing that our approach works better across a range of misclassification costs.
ISSN: 0004-3702, 1872-7921
DOI: 10.1016/j.artint.2020.103400
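The abstract describes sequential, cost-sensitive attribute selection: at each step, either classify now or pay to measure another attribute, whichever minimizes the expected cost. The sketch below is a minimal Python illustration of that idea only, not the authors' algorithm: it exhaustively enumerates states given by the set of observed attribute values, whereas the paper builds its state space from the leaves of decision trees constructed over attribute subsets. The toy dataset, attribute costs, and misclassification cost are all invented for the example.

# Illustrative sketch only. DATA, ATTRIBUTE_COSTS, and
# MISCLASSIFICATION_COST are hypothetical values, not taken from the paper.
from functools import lru_cache
from collections import Counter

# Toy dataset: each row is (attribute-value dict, class label).
DATA = [
    ({"a": 0, "b": 0}, "neg"),
    ({"a": 0, "b": 1}, "pos"),
    ({"a": 1, "b": 0}, "pos"),
    ({"a": 1, "b": 1}, "pos"),
]
ATTRIBUTE_COSTS = {"a": 1.0, "b": 3.0}   # cost of measuring each attribute
MISCLASSIFICATION_COST = 10.0            # cost of a wrong final label

def matching_rows(evidence):
    """Training rows consistent with the (attribute, value) pairs seen so far."""
    return [(x, y) for x, y in DATA if all(x[k] == v for k, v in evidence)]

@lru_cache(maxsize=None)
def expected_cost(evidence=()):
    """Return (cost, action): the minimal expected cost from this state and
    an action achieving it. A state is the tuple of observations made so
    far, which plays the role of a belief state over the true class."""
    rows = matching_rows(evidence)
    n = len(rows)
    label, count = Counter(y for _, y in rows).most_common(1)[0]
    # Option 1: stop now and predict the majority label among matching rows.
    best = (MISCLASSIFICATION_COST * (n - count) / n, ("classify", label))
    # Option 2: pay to measure an unobserved attribute, then act optimally.
    observed = {k for k, _ in evidence}
    for attr, test_cost in ATTRIBUTE_COSTS.items():
        if attr in observed:
            continue
        outcomes = Counter(x[attr] for x, _ in rows)
        cost = test_cost + sum(
            (cnt / n) * expected_cost(tuple(sorted(evidence + ((attr, val),))))[0]
            for val, cnt in outcomes.items()
        )
        best = min(best, (cost, ("measure", attr)))
    return best

cost, action = expected_cost()
print(f"expected cost {cost:.2f}, first action {action}")

Running this prints the minimal expected cost from the empty observation state and a first optimal action, making concrete the measurement-cost versus accuracy trade-off that the paper's tree-based MDP optimizes at scale.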