Driving down Poisson error can offset classification error in clinical tasks
Saved in:
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Medical machine learning algorithms are typically evaluated on accuracy against a clinician-defined ground truth, a reasonable initial choice since trained clinicians are usually better classifiers than ML models. However, this metric does not fully capture the actual clinical task: it neglects the fact that humans, even with perfect accuracy, are subject to non-trivial error from the Poisson statistics of rare events, because clinical protocols often specify a relatively small sample size. For example, to quantitate malaria on a thin blood film a clinician examines only 2000 red blood cells (about 0.0004 µL), which can yield large Poisson variation in the actual number of parasites present, so that even a perfect human's count can differ substantially from the true average load. In contrast, an ML system may be less accurate at the object level, but it may also have the option to examine more blood (e.g. 0.1 µL, or 250x as much). Then, while its parasite identification error is higher, the Poisson variability of its estimate is lower due to the larger sample size.
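As a rough illustration of this trade-off, the following Monte Carlo sketch compares a perfectly accurate human reading 2000 cells against an imperfect classifier reading 250x as many. Only those two sample sizes come from the abstract; the parasitemia and the per-cell sensitivity/specificity are assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative values (only the 2000-cell and 250x figures
# appear in the abstract; the rest are hypothetical):
f = 0.005                 # true fraction of parasitized red blood cells
n_human = 2_000           # cells a clinician examines (~0.0004 µL)
n_ml = 250 * n_human      # cells the ML system examines (~0.1 µL)
sens, spec = 0.95, 0.999  # assumed ML per-cell sensitivity / specificity
trials = 100_000          # Monte Carlo repetitions

# Perfectly accurate human: all error comes from sampling (binomial,
# well approximated by Poisson at this low parasitemia).
human_est = rng.binomial(n_human, f, trials) / n_human

# Imperfect ML: draw the true parasites in the larger sample, then add
# per-cell misclassification (missed parasites plus false positives).
true_pos = rng.binomial(n_ml, f, trials)
detected = rng.binomial(true_pos, sens) + rng.binomial(n_ml - true_pos, 1 - spec)
ml_est = detected / n_ml

for name, est in [("perfect human, 2000 cells", human_est),
                  ("imperfect ML, 500000 cells", ml_est)]:
    rel_rmse = np.sqrt(np.mean((est - f) ** 2)) / f
    print(f"{name}: relative RMSE of parasitemia = {rel_rmse:.1%}")
```

Under these particular assumptions the ML estimate's total error lands well below the perfect human's, because the 250x sample shrinks the Poisson term faster than the per-cell mistakes inflate the classification term.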
To qualify for clinical deployment, an ML system's performance must match the current standard of care, typically a very demanding target. To achieve this, it may be possible to offset the ML system's lower accuracy by increasing its sample size to reduce Poisson error, and thus attain the same net clinical performance as a perfectly accurate human limited by a smaller sample size. In this paper, we analyse the mathematics of the relationship between Poisson error, classification error, and total error. This mathematical toolkit enables teams optimizing ML systems to leverage a relative strength (larger sample sizes) to offset a relative weakness (classification accuracy). We illustrate the methods with two concrete examples: diagnosis and quantitation of malaria on blood films.
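The abstract only names this relationship; one standard way to see how the two error sources combine is the bias-variance split of the parasitemia estimator, written here in notation assumed for illustration rather than taken from the paper:

```latex
% Assumed notation: \hat{f} estimates the true parasitemia f from N
% examined cells, with per-cell sensitivity s and false-positive rate q,
% so that E[\hat{f}] = s f + q (1 - f).
\[
  \mathrm{MSE}(\hat{f})
  \;=\; \underbrace{\bigl(\mathbb{E}[\hat{f}] - f\bigr)^{2}}_{\text{classification error (bias)}}
  \;+\; \underbrace{\mathrm{Var}(\hat{f})}_{\text{Poisson error}\,\approx\,\mathbb{E}[\hat{f}]/N}
\]
```

Only the variance term falls as N grows, so a larger sample can drive total error down to, but not below, the classification floor; matching a small-sample human is feasible exactly when that floor sits under the human's Poisson error.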
DOI: 10.48550/arxiv.2405.06065