EndoL2H: Deep Super-Resolution for Capsule Endoscopy
Although wireless capsule endoscopy is the preferred modality for diagnosis and assessment of small bowel diseases, the poor camera resolution is a substantial limitation for both subjective and automated diagnostics. Enhanced-resolution endoscopy has been shown to improve the adenoma detection rate for conventional endoscopy and is likely to do the same for capsule endoscopy. In this work, we propose and quantitatively validate a novel framework to learn a mapping from low- to high-resolution endoscopic images. We combine conditional adversarial networks with a spatial attention block to improve the resolution by up to factors of 8×, 10×, and 12×. Quantitative and qualitative studies demonstrate the superiority of EndoL2H over the state-of-the-art deep super-resolution methods Deep Back-Projection Networks (DBPN), Deep Residual Channel Attention Networks (RCAN), and the Super-Resolution Generative Adversarial Network (SRGAN). Mean Opinion Score (MOS) tests were performed by 30 gastroenterologists to qualitatively assess and confirm the clinical relevance of the approach. EndoL2H is generally applicable to any endoscopic capsule system and has the potential to improve diagnosis and better harness computational approaches for polyp detection and characterization. Our code and trained models are available at https://github.com/CapsuleEndoscope/EndoL2H .
Saved in:
Published in: | IEEE transactions on medical imaging, 2020-12, Vol.39 (12), p.4297-4309 |
---|---|
Main authors: | Almalioglu, Yasin; Bengisu Ozyoruk, Kutsev; Gokce, Abdulkadir; Incetan, Kagan; Irem Gokceler, Guliz; Ali Simsek, Muhammed; Ararat, Kivanc; Chen, Richard J.; Durr, Nicholas J.; Mahmood, Faisal; Turan, Mehmet |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
container_end_page | 4309 |
container_issue | 12 |
container_start_page | 4297 |
container_title | IEEE transactions on medical imaging |
container_volume | 39 |
creator | Almalioglu, Yasin; Bengisu Ozyoruk, Kutsev; Gokce, Abdulkadir; Incetan, Kagan; Irem Gokceler, Guliz; Ali Simsek, Muhammed; Ararat, Kivanc; Chen, Richard J.; Durr, Nicholas J.; Mahmood, Faisal; Turan, Mehmet |
description | Although wireless capsule endoscopy is the preferred modality for diagnosis and assessment of small bowel diseases, the poor camera resolution is a substantial limitation for both subjective and automated diagnostics. Enhanced-resolution endoscopy has been shown to improve the adenoma detection rate for conventional endoscopy and is likely to do the same for capsule endoscopy. In this work, we propose and quantitatively validate a novel framework to learn a mapping from low- to high-resolution endoscopic images. We combine conditional adversarial networks with a spatial attention block to improve the resolution by up to factors of 8×, 10×, and 12×. Quantitative and qualitative studies demonstrate the superiority of EndoL2H over the state-of-the-art deep super-resolution methods Deep Back-Projection Networks (DBPN), Deep Residual Channel Attention Networks (RCAN), and the Super-Resolution Generative Adversarial Network (SRGAN). Mean Opinion Score (MOS) tests were performed by 30 gastroenterologists to qualitatively assess and confirm the clinical relevance of the approach. EndoL2H is generally applicable to any endoscopic capsule system and has the potential to improve diagnosis and better harness computational approaches for polyp detection and characterization. Our code and trained models are available at https://github.com/CapsuleEndoscope/EndoL2H . |
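The abstract describes combining a conditional adversarial network with a spatial attention block, which weights feature-map locations by a learned per-pixel gate so the generator emphasizes informative regions. As a minimal illustrative sketch only (not the authors' implementation: the channel-mixing weights `w`, the single-map sigmoid gate, and the toy tensor sizes are all simplifying assumptions), the gating step can be expressed in NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features, w):
    """Toy spatial attention gate (illustrative only).

    features: (C, H, W) feature tensor.
    w: (C,) channel-mixing weights that collapse the channels into a
       single (H, W) map of attention logits.
    Returns the features with every channel scaled by a per-pixel
    gate in (0, 1), so spatially "important" pixels pass through
    with higher weight.
    """
    score = np.tensordot(w, features, axes=(0, 0))  # (H, W) logits
    gate = sigmoid(score)                           # per-pixel weights in (0, 1)
    return features * gate[None, :, :]              # broadcast gate over channels

# Demo on random data with hypothetical sizes (8 channels, 4x4 spatial grid)
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
out = spatial_attention(feats, rng.standard_normal(8))
print(out.shape)  # (8, 4, 4)
```

Because the gate lies strictly in (0, 1), the output never exceeds the input in magnitude at any location; in a trained network the gate map would be produced by learned convolutions rather than a fixed weight vector.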
doi_str_mv | 10.1109/TMI.2020.3016744 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0278-0062 |
ispartof | IEEE transactions on medical imaging, 2020-12, Vol.39 (12), p.4297-4309 |
issn | 0278-0062 (print); 1558-254X (electronic) |
language | eng |
recordid | cdi_pubmed_primary_32795966 |
source | IEEE Electronic Library (IEL) |
subjects | Adenoma; Cameras; Capsule endoscopy; Computer applications; conditional generative adversarial network; Degradation; Diagnosis; Endoscopes; Endoscopy; Generative adversarial networks; Generators; Image resolution; Networks; Small intestine; spatial attention network; Spatial resolution; super-resolution |
title | EndoL2H: Deep Super-Resolution for Capsule Endoscopy |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T17%3A28%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=EndoL2H:%20Deep%20Super-Resolution%20for%20Capsule%20Endoscopy&rft.jtitle=IEEE%20transactions%20on%20medical%20imaging&rft.au=Almalioglu,%20Yasin&rft.date=2020-12-01&rft.volume=39&rft.issue=12&rft.spage=4297&rft.epage=4309&rft.pages=4297-4309&rft.issn=0278-0062&rft.eissn=1558-254X&rft.coden=ITMID4&rft_id=info:doi/10.1109/TMI.2020.3016744&rft_dat=%3Cproquest_RIE%3E2434470719%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2467299337&rft_id=info:pmid/32795966&rft_ieee_id=9167261&rfr_iscdi=true |