Weakly-Supervised Cross-Domain Segmentation of Electron Microscopy with Sparse Point Annotation
| Main authors: | , , , |
| --- | --- |
| Format: | Article |
| Language: | eng |
| Keywords: | |
| Online access: | Order full text |
Abstract:

Accurate segmentation of organelle instances from electron microscopy (EM) images plays an essential role in neuroscience research. However, practical scenarios usually suffer from high annotation costs, label scarcity, and large domain diversity. While unsupervised domain adaptation (UDA), which assumes no annotation effort on the target data, is a promising way to alleviate these challenges, its performance on complicated segmentation tasks is still far from practical. To address these issues, we investigate a highly annotation-efficient form of weak supervision that assumes only sparse center points on a small subset of object instances in the target training images. To achieve accurate segmentation with such partial point annotations, we introduce instance counting and center detection as auxiliary tasks and design a multitask learning framework that leverages the correlations among counting, detection, and segmentation, all of which have partial or no supervision. Building on the different degrees of domain invariance of the three tasks, we enforce the count estimate, through a novel soft consistency loss, as a global prior on center detection, which in turn guides the per-pixel segmentation. To further compensate for annotation sparsity, we develop a cross-position cut-and-paste scheme for label augmentation and an entropy-based pseudo-label selection. The experimental results highlight that, using only extremely weak annotation, e.g., 15% sparse points, for model training, the proposed model significantly outperforms UDA methods and achieves performance comparable to its supervised counterpart. The robustness of our model across validations and the limited expert knowledge required for sparse point annotation further increase its potential application value.
DOI: 10.48550/arxiv.2404.00667
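
To make the abstract's count-as-global-prior idea concrete, here is a minimal PyTorch sketch of a soft consistency loss that ties the spatial integral of a predicted center heatmap to the instance count regressed by the counting head. The function and tensor names are hypothetical, and the smooth-L1 form is an assumption; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def soft_count_consistency_loss(pred_count, center_heatmap, beta=1.0):
    """Soft consistency between the counting and detection heads (sketch).

    pred_count:     (B,) instance counts regressed by the counting head.
    center_heatmap: (B, H, W) per-pixel center probabilities from the
                    detection head; its spatial sum implies a count.
    """
    implied_count = center_heatmap.flatten(1).sum(dim=1)  # (B,)
    # Smooth L1 keeps the penalty "soft" under small count disagreements.
    return F.smooth_l1_loss(implied_count, pred_count, beta=beta)
```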
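Similarly, entropy-based pseudo-label selection can be sketched as keeping only low-entropy (confident) pixels of the segmentation output for self-training. The threshold value and the names below are illustrative assumptions, not the paper's settings.

```python
import torch

def select_pseudo_labels(seg_logits, max_entropy=0.2):
    """Entropy-based pseudo-label selection (sketch).

    seg_logits: (B, C, H, W) segmentation logits on unlabeled target data.
    Returns per-pixel pseudo-labels and a mask of reliable pixels.
    """
    probs = torch.softmax(seg_logits, dim=1)
    # Per-pixel predictive entropy; the clamp avoids log(0).
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # (B, H, W)
    pseudo_labels = probs.argmax(dim=1)                          # (B, H, W)
    reliable = entropy < max_entropy  # train only on confident pixels
    return pseudo_labels, reliable
```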
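Finally, a simplified sketch of cut-and-paste label augmentation: patches around annotated center points are copied, together with their labels, to random positions, densifying the sparse supervision. The patch size, placement policy, and helper names are assumptions; the paper's cross-position scheme may paste across images and positions in a more structured way.

```python
import torch

def cut_and_paste(image, mask, points, patch=64):
    """Copy labeled patches around annotated points to random positions (sketch).

    image:  (C, H, W) tensor; mask: (H, W) label map.
    points: iterable of (y, x) annotated center points.
    """
    H, W = image.shape[-2:]
    r = patch // 2
    img_aug, mask_aug = image.clone(), mask.clone()
    for y, x in points:
        y0, x0 = max(y - r, 0), max(x - r, 0)
        y1, x1 = min(y + r, H), min(x + r, W)
        h, w = y1 - y0, x1 - x0
        # Random top-left corner that keeps the pasted patch in bounds.
        ty = torch.randint(0, H - h + 1, (1,)).item()
        tx = torch.randint(0, W - w + 1, (1,)).item()
        img_aug[..., ty:ty + h, tx:tx + w] = image[..., y0:y1, x0:x1]
        mask_aug[ty:ty + h, tx:tx + w] = mask[y0:y1, x0:x1]
    return img_aug, mask_aug
```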