Learning Hierarchy Aware Features for Reducing Mistake Severity
Format: Article
Language: English
Abstract: Label hierarchies are often available a priori, for example as part of a biological taxonomy or a language dataset such as WordNet. Several works exploit these to learn hierarchy-aware features that encourage the classifier to make semantically meaningful mistakes while maintaining or reducing the overall error. In this paper, we propose a novel approach for learning Hierarchy Aware Features (HAF) that leverages classifiers at each level of the hierarchy, constrained to generate predictions consistent with the label hierarchy. The classifiers are trained by minimizing a Jensen-Shannon Divergence with target soft labels obtained from the fine-grained classifiers. Additionally, we employ a simple geometric loss that constrains the feature-space geometry to capture the semantic structure of the label space. HAF is a training-time approach that reduces the severity of mistakes while maintaining top-1 error, thereby addressing the problem that the cross-entropy loss treats all mistakes as equal. We evaluate HAF on three hierarchical datasets and achieve state-of-the-art results on the iNaturalist-19 and CIFAR-100 datasets. The source code is available at https://github.com/07Agarg/HAF
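
The core training signal described above, a Jensen-Shannon divergence between a coarse-level classifier and soft targets obtained by marginalizing the fine-grained classifier's probabilities up the hierarchy, can be sketched as follows. This is a minimal illustrative sketch, not the paper's reference implementation (see the linked repository for that): the PyTorch framing, the function name `jsd_consistency_loss`, the `fine_to_coarse` index mapping, and detaching the soft target are all assumptions made here for the example.

```python
import torch
import torch.nn.functional as F

def jsd_consistency_loss(coarse_logits, fine_logits, fine_to_coarse):
    """Jensen-Shannon divergence between a coarse-level classifier's
    prediction and a soft target built from the fine-grained classifier.

    coarse_logits:  (B, C_coarse) logits from the coarse-level head
    fine_logits:    (B, C_fine)   logits from the fine-grained head
    fine_to_coarse: (C_fine,)     long tensor mapping each fine class
                                  to its coarse ancestor
    """
    p_coarse = F.softmax(coarse_logits, dim=-1)

    # Marginalize fine-grained probabilities up the hierarchy: the soft
    # target for coarse class c is the total probability mass the fine
    # head assigns to c's descendants.
    p_fine = F.softmax(fine_logits, dim=-1)
    soft_target = torch.zeros_like(p_coarse)
    soft_target.index_add_(1, fine_to_coarse, p_fine)
    # Assumption: treat the fine head's output as a fixed target so
    # gradients flow only into the coarse head through this loss.
    soft_target = soft_target.detach()

    # JSD(P || Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M = (P + Q) / 2
    m = 0.5 * (p_coarse + soft_target)
    def kl(p, q):
        return (p * (p.clamp_min(1e-12).log() - q.clamp_min(1e-12).log())).sum(-1)
    return (0.5 * kl(p_coarse, m) + 0.5 * kl(soft_target, m)).mean()

# Example usage with random logits and a toy two-level hierarchy:
# 5 fine classes grouped into 3 coarse classes.
fine_to_coarse = torch.tensor([0, 0, 1, 1, 2])
loss = jsd_consistency_loss(torch.randn(4, 3), torch.randn(4, 5), fine_to_coarse)
```

Unlike a plain KL divergence, the JSD is symmetric and bounded, which makes it a stable consistency term between heads whose predictions may initially disagree sharply.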
DOI: 10.48550/arxiv.2207.12646