Conformal Predictions for Probabilistically Robust Scalable Machine Learning Classification
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Conformal predictions make it possible to define reliable and robust learning algorithms. However, they are essentially a method for evaluating whether an algorithm is good enough to be used in practice. To define a reliable learning framework for classification from the very beginning of its design, the concept of a scalable classifier was introduced to generalize the classical notion of a classifier by linking it to statistical order theory and probabilistic learning theory. In this paper, we analyze the similarities between scalable classifiers and conformal predictions by introducing a new definition of a score function and defining a special set of input variables, the conformal safety set, which identifies patterns in the input space that satisfy the error coverage guarantee, i.e., for which the probability of observing the wrong (possibly unsafe) label is bounded by a predefined error level $\varepsilon$. We demonstrate the practical implications of this framework through an application in cybersecurity for identifying DNS tunneling attacks. Our work contributes to the development of probabilistically robust and reliable machine learning models.
DOI: 10.48550/arxiv.2403.10368
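
As a rough illustration of the error coverage guarantee described in the abstract, the sketch below applies standard split conformal prediction to a synthetic binary classification task: a score function is calibrated on held-out data so that the resulting prediction sets miss the true label with probability at most $\varepsilon$. The logistic-regression model, synthetic dataset, and probability-based score are illustrative assumptions for this sketch only; they are not the score function or the conformal safety set construction proposed in the paper.

```python
# Minimal sketch of split conformal prediction for classification, illustrating
# the epsilon-level error coverage guarantee. All modeling choices below
# (classifier, dataset, score) are illustrative, not the paper's construction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

epsilon = 0.1  # target error level: P(true label not in prediction set) <= epsilon

# Synthetic data split into training, calibration, and test folds.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Nonconformity score: 1 minus the predicted probability of the true class.
cal_probs = model.predict_proba(X_cal)
cal_scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

# Calibrated threshold: conformal quantile with the finite-sample correction.
n = len(cal_scores)
q_level = np.ceil((n + 1) * (1 - epsilon)) / n
q_hat = np.quantile(cal_scores, q_level, method="higher")

# Prediction sets: include every label whose score falls below the threshold.
test_probs = model.predict_proba(X_test)
pred_sets = (1.0 - test_probs) <= q_hat

# Empirical coverage should be at least 1 - epsilon, up to sampling noise.
coverage = pred_sets[np.arange(len(y_test)), y_test].mean()
print(f"empirical coverage: {coverage:.3f} (target >= {1 - epsilon:.2f})")
```

In this sketch, the guarantee is marginal over the calibration and test draws; points whose prediction set contains only "safe" labels play the role that the conformal safety set plays in the paper, but the exact definition there differs.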