Expert-Level Annotation Quality Achieved by Gamified Crowdsourcing for B-line Segmentation in Lung Ultrasound
Format: Article
Language: English
Abstract: Accurate and scalable annotation of medical data is critical for the
development of medical AI, but obtaining time for annotation from medical
experts is challenging. Gamified crowdsourcing has demonstrated potential for
obtaining highly accurate annotations for medical data at scale, and we
demonstrate the same in this study for the segmentation of B-lines, an
indicator of pulmonary congestion, on still frames within point-of-care lung
ultrasound clips. We collected 21,154 annotations from 214 annotators over 2.5
days, and we demonstrated that the concordance of crowd consensus segmentations
with reference standards exceeds that of individual experts with the same
reference standards, both in terms of B-line count (mean squared error 0.239
vs. 0.308, p …
DOI: 10.48550/arxiv.2312.10198
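
The abstract reports concordance as the mean squared error between consensus and reference B-line counts. As a minimal sketch of how such a comparison could be computed (the paper's actual consensus and counting procedures are not described in this record, so the pixel-wise majority vote, the connected-component counting rule, and all function names below are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage


def consensus_mask(annotator_masks: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Pixel-wise majority vote over a stack of binary masks.

    annotator_masks: array of shape (n_annotators, H, W) with 0/1 entries.
    A pixel is kept if at least `threshold` of annotators marked it.
    """
    vote_fraction = annotator_masks.mean(axis=0)
    return (vote_fraction >= threshold).astype(np.uint8)


def count_b_lines(mask: np.ndarray) -> int:
    """Count B-lines as connected components in a binary mask
    (an assumption; the study may count them differently)."""
    _, n_components = ndimage.label(mask)
    return n_components


def count_mse(pred_counts, ref_counts) -> float:
    """Mean squared error between predicted and reference B-line counts."""
    pred = np.asarray(pred_counts, dtype=float)
    ref = np.asarray(ref_counts, dtype=float)
    return float(np.mean((pred - ref) ** 2))


# Toy example: three annotators mark one 8x8 frame; score the consensus
# count against a hypothetical reference count of 2.
rng = np.random.default_rng(0)
masks = (rng.random((3, 8, 8)) > 0.7).astype(np.uint8)
consensus = consensus_mask(masks)
print(count_mse([count_b_lines(consensus)], [2]))
```

The same count_mse call would apply to an individual expert's counts, which is how the reported 0.239 vs. 0.308 comparison between crowd consensus and individual experts could be framed.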