SimCol3D — 3D reconstruction during colonoscopy challenge
Published in: Medical image analysis, 2024-08, Vol. 96, p. 103195, Article 103195
Main authors: , , , , , , , , , , , , , , , , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Colorectal cancer is one of the most common cancers in the world. While colonoscopy is an effective screening technique, navigating an endoscope through the colon to detect polyps is challenging. A 3D map of the observed surfaces could enhance the identification of unscreened colon tissue and serve as a training platform. However, reconstructing the colon from video footage remains difficult. Learning-based approaches hold promise as robust alternatives, but they require extensive datasets. By establishing a benchmark dataset, the 2022 EndoVis sub-challenge SimCol3D aimed to facilitate data-driven depth and pose prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022 in Singapore. Six teams from around the world, representing both academia and industry, participated in the three sub-challenges: synthetic depth prediction, synthetic pose prediction, and real pose prediction. This paper describes the challenge, the submitted methods, and their results. We show that depth prediction from synthetic colonoscopy images is robustly solvable, while pose estimation remains an open research question.
Highlights:
- SimCol3D facilitates research in colon reconstruction to improve cancer screening.
- SimCol3D is the first challenge providing synthetic and real colonoscopy data.
- A detailed description and comparison of participants' methods.
- An analysis of the best-performing models.
- A discussion of the current state of the field and future research directions.
ISSN: 1361-8415 (print); 1361-8423 (electronic)
DOI: 10.1016/j.media.2024.103195