Finding Label and Model Errors in Perception Data With Learned Observation Assertions
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | SIGMOD 2022. ML is being deployed in complex, real-world scenarios where errors have
impactful consequences. In these systems, thorough testing of the ML pipelines
is critical. A key component in ML deployment pipelines is the curation of
labeled training data. Common practice in the ML literature assumes that labels
are the ground truth. However, in our experience in a large autonomous vehicle
development center, we have found that vendors can often provide erroneous
labels, which can lead to downstream safety risks in trained models.
To address these issues, we propose a new abstraction, learned observation
assertions, and implement it in a system called Fixy. Fixy leverages existing
organizational resources, such as existing (possibly noisy) labeled datasets or
previously trained ML models, to learn a probabilistic model for finding errors
in human- or model-generated labels. Given user-provided features and these
existing resources, Fixy learns feature distributions that specify likely and
unlikely values (e.g., that a speed of 30 mph is likely but 300 mph is unlikely).
It then uses these feature distributions to score labels for potential errors.
We show that Fixy can automatically rank potential errors in real datasets with
up to 2× higher precision compared to recent work on model assertions
and standard techniques such as uncertainty sampling. |
---|---|
DOI: | 10.48550/arxiv.2201.05797 |
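
The scoring idea described in the abstract can be sketched concretely. The snippet below is a minimal illustration only, not the paper's implementation: the speed feature, the reference data, and the single-Gaussian model are assumptions chosen for brevity, standing in for the feature distributions Fixy learns from existing organizational resources.

```python
import numpy as np

# Hypothetical speed labels (mph) drawn from an existing, possibly noisy,
# labeled dataset -- a stand-in for the organizational resources Fixy reuses.
reference_speeds = np.array([28.0, 31.5, 30.2, 29.8, 33.1, 27.4, 32.0, 30.9])

# Learn a simple feature distribution over speed (here: a single Gaussian).
mu, sigma = reference_speeds.mean(), reference_speeds.std()

def error_score(speed_mph: float) -> float:
    """Negative Gaussian log-likelihood: higher means the value is more
    surprising under the learned distribution, hence more likely an error."""
    return 0.5 * ((speed_mph - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

# Rank candidate labels for review: 300 mph scores as far more suspicious
# than 30 mph, mirroring the example in the abstract.
candidates = [30.0, 45.0, 300.0]
for speed in sorted(candidates, key=error_score, reverse=True):
    print(f"{speed:6.1f} mph -> error score {error_score(speed):.2f}")
```

Per the abstract, the actual system learns such distributions over user-provided features and uses them to score human- or model-generated labels for potential errors, rather than scoring a single hard-coded scalar as in this sketch.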