Rethinking Generalization: The Impact of Annotation Style on Medical Image Segmentation
Published in: Machine Learning for Biomedical Imaging 1 (2022)
Saved in:
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Generalization is an important attribute of machine learning models, particularly for those that are to be deployed in a medical context, where unreliable predictions can have real-world consequences. While the failure of models to generalize across datasets is typically attributed to a mismatch in the data distributions, performance gaps are often a consequence of biases in the 'ground-truth' label annotations. This is particularly important in the context of medical image segmentation of pathological structures (e.g. lesions), where the annotation process is much more subjective and affected by a number of underlying factors, including the annotation protocol, rater education/experience, and clinical aims, among others. In this paper, we show that modeling annotation biases, rather than ignoring them, poses a promising way of accounting for differences in annotation style across datasets. To this end, we propose a generalized conditioning framework to (1) learn and account for different annotation styles across multiple datasets using a single model, (2) identify similar annotation styles across different datasets in order to permit their effective aggregation, and (3) fine-tune a fully trained model to a new annotation style with just a few samples. Next, we present an image-conditioning approach to model annotation styles that correlate with specific image features, potentially enabling detection biases to be more easily identified.
DOI: 10.48550/arxiv.2210.17398
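The abstract describes the conditioning framework only at a high level. As a rough illustration of what dataset-conditioned segmentation can look like, the sketch below modulates a backbone's feature maps with a learned per-dataset "annotation style" embedding via FiLM-style scale and shift. The module name, embedding size, and the FiLM mechanism itself are assumptions made here for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): one segmentation network emits
# different annotation styles by conditioning on a learned style embedding.
import torch
import torch.nn as nn

class StyleConditioning(nn.Module):
    """FiLM-style modulation of feature maps by an annotation-style embedding."""
    def __init__(self, num_styles: int, embed_dim: int, num_channels: int):
        super().__init__()
        self.embed = nn.Embedding(num_styles, embed_dim)        # one vector per dataset/style
        self.to_scale_shift = nn.Linear(embed_dim, 2 * num_channels)

    def forward(self, features: torch.Tensor, style_id: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W); style_id: (B,) integer id of the source dataset
        gamma, beta = self.to_scale_shift(self.embed(style_id)).chunk(2, dim=-1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)               # (B, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return features * (1 + gamma) + beta

# Usage: insert after a convolutional block of any segmentation backbone,
# then select which dataset's labeling style the model should reproduce.
cond = StyleConditioning(num_styles=3, embed_dim=16, num_channels=64)
feats = torch.randn(2, 64, 48, 48)
style = torch.tensor([0, 2])                                    # style id per sample
out = cond(feats, style)
print(out.shape)  # torch.Size([2, 64, 48, 48])
```

At inference time, sweeping the style id for a fixed image would expose how the predicted segmentation changes with annotation style, which is one way the paper's goals (1)-(3) could be probed in practice.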