Exploring Fringe Settings of SVMs for Classification
Saved in:
Main authors: | , |
---|---|
Format: | Book chapter |
Language: | eng |
Subjects: | |
Online access: | Full text |
Summary: | There are many practical applications where learning from single-class examples is either the only possible solution or has a distinct performance advantage. The first case occurs when obtaining examples of a second class is difficult, e.g., classifying sites of “interest” based on web accesses. The second situation is exemplified by the one-class support vector machine, which was the winning submission of the second task of the KDD Cup 2002.
This paper explores the limits of supervised learning using both positive and negative examples. To this end, we analyse the KDD Cup dataset using four classifiers (support vector machines and ridge regression) and several feature selection methods. Our analysis shows that there is a consistent pattern of performance differences between one- and two-class learning for all algorithms investigated, and these patterns persist even with aggressive dimensionality reduction through automated feature selection. Using insight gained from the above analysis, we generate synthetic data showing a similar pattern of performance. |
ISSN: | 0302-9743, 1611-3349 |
DOI: | 10.1007/978-3-540-39804-2_26 |
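The abstract's central contrast — learning from positives only versus learning from both classes — can be illustrated with a minimal sketch. This is not the paper's code; it assumes scikit-learn and uses synthetic Gaussian clusters (the cluster locations, `nu`, and `gamma` values are illustrative choices, not taken from the paper):

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)

# Synthetic "positive" class: a tight Gaussian cluster at the origin.
X_pos = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
# Synthetic "negative" class: a second cluster, well separated.
X_neg = rng.normal(loc=4.0, scale=1.0, size=(200, 2))

# One-class setting: train on positives only. The nu parameter upper-bounds
# the fraction of training points treated as outliers.
oc = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_pos)
# predict() returns +1 for points that look like the training class, -1 otherwise.
pos_accepted = (oc.predict(X_pos) == 1).mean()   # most positives accepted
neg_rejected = (oc.predict(X_neg) == -1).mean()  # most negatives rejected

# Two-class setting: same data, but both classes visible at training time.
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 200 + [0] * 200)
clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)
two_class_acc = clf.score(X, y)  # near-perfect when both classes are separable

print(pos_accepted, neg_rejected, two_class_acc)
```

With cleanly separated clusters both settings do well; the performance gap the paper studies emerges when the classes overlap or when negatives are scarce or unrepresentative.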