Detecting the occluding contours of the uterus to automatise augmented laparoscopy: score, loss, dataset, evaluation and user study
Published in: International Journal for Computer Assisted Radiology and Surgery, 2020-07, Vol. 15 (7), pp. 1177-1186
Authors: François, Tom; Calvet, Lilian; Madad Zadeh, Sabrina; Saboul, Damien; Gasparini, Simone; Samarakoon, Prasad; Bourdel, Nicolas; Bartoli, Adrien
Format: Article
Language: English
DOI: 10.1007/s11548-020-02151-w
ISSN: 1861-6410 (print); 1861-6429 (electronic)
PMID: 32372385
Subjects: Augmented reality; Computer Imaging; Computer Science; Computer Vision and Pattern Recognition; Contours; Datasets; Health Informatics; Image reconstruction; Imaging; Laparoscopy; Magnetic resonance imaging; Medical Imaging; Medicine; Medicine & Public Health; Original Article; Pattern Recognition and Graphics; Production methods; Radiology; Surgeons; Surgery; Thickness; Three dimensional models; Two dimensional models; Uterus; Vision
Purpose
Registering a preoperative 3D model, reconstructed for example from MRI, to intraoperative 2D laparoscopy images is the main challenge in achieving augmented reality in laparoscopy. Current systems have a major limitation: they require the surgeon to manually mark the occluding contours during surgery. This requires the surgeon to fully comprehend the non-trivial concept of occluding contours and consumes surgeon time, directly impacting acceptance and usability. To overcome this limitation, we propose a complete framework for object-class occluding contour detection (OC2D), with application to uterus surgery.
Methods
Our first contribution is a new distance-based evaluation score complying with all the relevant performance criteria. Our second contribution is a loss function combining cross-entropy and two new penalties designed to encourage 1-pixel-thick responses. This allows us to train a U-Net end to end, outperforming all competing methods, which tend to produce thick responses. Our third contribution is a dataset of 3818 carefully labelled laparoscopy images of the uterus, which was used to train and evaluate our detector.
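The abstract describes the loss only at this level of detail. The following is a minimal illustrative sketch, not the paper's exact formulation, of how a cross-entropy term could be combined with two thickness-oriented penalties when training a U-Net towards 1-pixel-thick contour responses. The penalty forms, the weights lambda_d and lambda_s, and the helper name oc2d_loss are assumptions made for illustration (PyTorch).

    # Illustrative sketch only: the two penalties below are stand-ins for
    # the (unspecified) penalties described in the abstract.
    import torch
    import torch.nn.functional as F

    def oc2d_loss(logits, target, dist_map, lambda_d=1.0, lambda_s=0.1):
        """logits:   (B, 1, H, W) raw U-Net output.
           target:   (B, 1, H, W) float mask of the 1-pixel-thick
                     ground-truth contour.
           dist_map: (B, 1, H, W) distance transform of the ground-truth
                     contour, precomputed e.g. with
                     scipy.ndimage.distance_transform_edt."""
        prob = torch.sigmoid(logits)
        # Standard pixel-wise cross-entropy against the thin ground truth.
        ce = F.binary_cross_entropy_with_logits(logits, target)
        # Assumed penalty 1: responses far from the true contour cost more,
        # suppressing halos of thick responses around it.
        distance_penalty = (prob * dist_map).mean()
        # Assumed penalty 2: keep the total response mass close to the
        # ground-truth contour length, discouraging responses thicker
        # than 1 pixel.
        sparsity_penalty = F.relu(prob.sum() - target.sum()) / target.numel()
        return ce + lambda_d * distance_penalty + lambda_s * sparsity_penalty

The design intuition behind such penalties is that plain cross-entropy alone tolerates a blurry band of high probability around the contour; any term that charges for probability mass far from, or in excess of, the thin ground truth pushes the network towards sharp 1-pixel responses.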
Results
Evaluation shows that the proposed detector has a false-negative rate similar to existing methods but substantially reduces both the false-positive rate and the response thickness. Finally, we ran a user study to evaluate the impact of OC2D against manually marked occluding contours in augmented laparoscopy, using 10 recorded gynecologic laparoscopies and involving 5 surgeons. Using OC2D reduced surgeon time by 3 min 53 s without sacrificing registration accuracy.
Conclusions
We provide a new set of criteria and a distance-based measure to evaluate an OC2D method. We propose an OC2D method which outperforms the state-of-the-art methods. The results obtained from the user study indicate that fully automatic augmented laparoscopy is feasible.
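The distance-based measure itself is not given in the abstract. As an illustration of the general idea, the sketch below scores a detected contour against the ground truth using distance transforms, tolerating detections within tau pixels of the true contour. The function name, the tolerance tau, and the F-measure-style combination are assumptions, not the paper's score.

    # Illustrative sketch of a symmetric distance-based contour score;
    # a stand-in for, not a reproduction of, the score the paper proposes.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def distance_score(detected, ground_truth, tau=5.0):
        """detected, ground_truth: 2D boolean arrays marking contour pixels."""
        # Distance from every pixel to the nearest ground-truth contour pixel.
        d_to_gt = distance_transform_edt(~ground_truth)
        # Distance from every pixel to the nearest detected contour pixel.
        d_to_det = distance_transform_edt(~detected)
        # Precision-like term: fraction of detected pixels lying within
        # tau pixels of the ground truth (penalises false positives and
        # thick responses).
        precision = np.mean(d_to_gt[detected] <= tau) if detected.any() else 0.0
        # Recall-like term: fraction of ground-truth pixels covered by a
        # nearby detection (penalises false negatives).
        recall = np.mean(d_to_det[ground_truth] <= tau) if ground_truth.any() else 0.0
        # Harmonic mean, in the spirit of an F-score.
        return 2 * precision * recall / (precision + recall + 1e-9)

Unlike pixel-exact overlap measures, a distance-tolerant score of this kind does not punish a detection that is correct but shifted by a pixel or two, which matters when the ground-truth contour is itself only 1 pixel thick.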