Mutually Improved Endoscopic Image Synthesis and Landmark Detection in Unpaired Image-to-Image Translation
Published in: | IEEE journal of biomedical and health informatics 2022-01, Vol.26 (1), p.127-138 |
---|---|
Main authors: | Sharan, Lalith; Romano, Gabriele; Koehler, Sven; Kelm, Halvar; Karck, Matthias; De Simone, Raffaele; Engelhardt, Sandy |
Format: | Article |
Language: | eng |
container_end_page | 138 |
---|---|
container_issue | 1 |
container_start_page | 127 |
container_title | IEEE journal of biomedical and health informatics |
container_volume | 26 |
creator | Sharan, Lalith; Romano, Gabriele; Koehler, Sven; Kelm, Halvar; Karck, Matthias; De Simone, Raffaele; Engelhardt, Sandy |
description | The CycleGAN framework allows for unsupervised image-to-image translation of unpaired data. In a scenario of surgical training on a physical surgical simulator, this method can be used to transform endoscopic images of phantoms into images which more closely resemble the intra-operative appearance of the same surgical target structure. This can be viewed as a novel augmented reality approach, which we coined Hyperrealism in previous work. In this use case, it is of paramount importance to render objects such as needles, sutures, or instruments consistently in both domains while altering the style to a more tissue-like appearance. Segmentation of these objects would allow for a direct transfer; however, contouring these partly tiny and thin foreground objects is cumbersome and potentially inaccurate. Instead, we propose to use landmark detection at the points where sutures pass into the tissue. This objective is directly incorporated into the CycleGAN framework by treating the performance of pre-trained detector models as an additional optimization goal. We show that a task defined on these sparse landmark labels improves the consistency of synthesis by the generator network in both domains. Comparing a baseline CycleGAN architecture to our proposed extension (DetCycleGAN), mean precision (PPV) improved by +61.32, mean sensitivity (TPR) by +37.91, and mean F_1 score by +0.4743. Furthermore, we show that, via dataset fusion, the generated intra-operative images can be leveraged as additional training data for the detection network itself. |
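The central mechanism described in the abstract, namely adding a frozen, pre-trained landmark detector as an extra optimization goal for a CycleGAN generator, can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the tiny generator and detector networks, the BCE heatmap loss, and the lambda_det weight are all placeholder assumptions, and the adversarial and cycle-consistency terms of the full CycleGAN objective are omitted.

```python
# Minimal sketch of a detector-guided CycleGAN generator update.
# Everything here is a placeholder: TinyGenerator, TinyDetector,
# the heatmap BCE loss, and lambda_det are assumptions for illustration.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the phantom -> intra-operative style generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDetector(nn.Module):
    """Stand-in for a pre-trained suture-landmark detector (heatmap output)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

generator = TinyGenerator()
detector = TinyDetector()                  # assumed pre-trained; kept frozen
for p in detector.parameters():
    p.requires_grad_(False)

det_loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)

phantom = torch.rand(2, 3, 64, 64)         # dummy phantom-domain images
landmarks = torch.rand(2, 1, 64, 64)       # dummy suture-point heatmaps

fake_or = generator(phantom)               # translate to OR-like style
# Detection consistency: landmarks must survive the style translation.
# Gradients pass through the frozen detector into the generator only.
lambda_det = 1.0                           # weighting is an assumption
loss = lambda_det * det_loss_fn(detector(fake_or), landmarks)
# The full objective would additionally sum adversarial and cycle losses.
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The design point this sketch shows is that the detector's weights stay frozen, so its loss gradient flows only into the generator, penalizing translations that move or erase suture landmarks while the style changes.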
doi_str_mv | 10.1109/JBHI.2021.3099858 |
format | Article |
identifier | ISSN: 2168-2194 |
ispartof | IEEE journal of biomedical and health informatics, 2022-01, Vol.26 (1), p.127-138 |
issn | 2168-2194; 2168-2208 |
language | eng |
recordid | cdi_pubmed_primary_34310335 |
source | IEEE Electronic Library (IEL) |
subjects | Augmented reality; Contouring; CycleGAN; Domains; Endoscopy; Generative adversarial networks; Humans; Image Processing, Computer-Assisted - methods; Image segmentation; landmark detection; landmark localization; Maintenance engineering; mitral valve repair; Optimization; Phantoms, Imaging; Semantics; Surgery; surgical simulation; surgical training; Sutures; Synthesis; Task analysis; Training; Translation; Valves |
title | Mutually Improved Endoscopic Image Synthesis and Landmark Detection in Unpaired Image-to-Image Translation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T21%3A37%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Mutually%20Improved%20Endoscopic%20Image%20Synthesis%20and%20Landmark%20Detection%20in%20Unpaired%20Image-to-Image%20Translation&rft.jtitle=IEEE%20journal%20of%20biomedical%20and%20health%20informatics&rft.au=Sharan,%20Lalith&rft.date=2022-01&rft.volume=26&rft.issue=1&rft.spage=127&rft.epage=138&rft.pages=127-138&rft.issn=2168-2194&rft.eissn=2168-2208&rft.coden=IJBHA9&rft_id=info:doi/10.1109/JBHI.2021.3099858&rft_dat=%3Cproquest_RIE%3E2621066021%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2621066021&rft_id=info:pmid/34310335&rft_ieee_id=9496194&rfr_iscdi=true |