Surgical Visual Domain Adaptation: Results from the MICCAI 2020 SurgVisDom Challenge

Surgical data science is revolutionizing minimally invasive surgery by enabling context-aware applications. However, many challenges exist around surgical data (and health data, more generally) needed to develop context-aware models. This work - presented as part of the Endoscopic Vision (EndoVis) challenge at the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020 conference - seeks to explore the potential for visual domain adaptation in surgery to overcome data privacy concerns.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Zia, Aneeq, Bhattacharyya, Kiran, Liu, Xi, Wang, Ziheng, Kondo, Satoshi, Colleoni, Emanuele, van Amsterdam, Beatrice, Hussain, Razeen, Hussain, Raabid, Maier-Hein, Lena, Stoyanov, Danail, Speidel, Stefanie, Jarc, Anthony
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
description Surgical data science is revolutionizing minimally invasive surgery by enabling context-aware applications. However, many challenges exist around surgical data (and health data, more generally) needed to develop context-aware models. This work - presented as part of the Endoscopic Vision (EndoVis) challenge at the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020 conference - seeks to explore the potential for visual domain adaptation in surgery to overcome data privacy concerns. In particular, we propose to use video from virtual reality (VR) simulations of surgical exercises in robotic-assisted surgery to develop algorithms that recognize tasks in a clinical-like setting. We present the performance of the different approaches to visual domain adaptation developed by challenge participants. Our analysis shows that the presented models were unable to learn meaningful motion-based features from VR data alone, but performed significantly better when a small amount of clinical-like data was also made available. Based on these results, we discuss promising methods and further work to address the problem of visual domain adaptation in surgical data science. We also release the challenge dataset publicly at https://www.synapse.org/surgvisdom2020.
doi_str_mv 10.48550/arxiv.2102.13644
format Article
creationdate 2021-02-26
rights http://creativecommons.org/licenses/by-nc-sa/4.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2102.13644
language eng
recordid cdi_arxiv_primary_2102_13644
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Surgical Visual Domain Adaptation: Results from the MICCAI 2020 SurgVisDom Challenge