Robust deep learning-based semantic organ segmentation in hyperspectral images
Published in: | Medical image analysis, 2022-08, Vol. 80, p. 102488, Article 102488 |
---|---|
Main authors: | Seidlitz, Silvia; Sellner, Jan; Odenthal, Jan; Özdemir, Berkin; Studier-Fischer, Alexander; Knödler, Samuel; Ayala, Leonardo; Adler, Tim J.; Kenngott, Hannes G.; Tizabi, Minu; Wagner, Martin; Nickel, Felix; Müller-Stich, Beat P.; Maier-Hein, Lena |
Format: | Article |
Language: | English |
Subjects: | Deep learning; Hyperspectral imaging; Open surgery; Organ segmentation; Semantic scene segmentation; Surgical data science |
Online access: | Full text |
creator | Seidlitz, Silvia; Sellner, Jan; Odenthal, Jan; Özdemir, Berkin; Studier-Fischer, Alexander; Knödler, Samuel; Ayala, Leonardo; Adler, Tim J.; Kenngott, Hannes G.; Tizabi, Minu; Wagner, Martin; Nickel, Felix; Müller-Stich, Beat P.; Maier-Hein, Lena |
description | •First study of optimal input modality and spatial granularity for organ segmentation.•Validation set of unprecedented size featuring 506 images annotated with 19 classes.•Deep learning-based hyperspectral image segmentation reaches inter-rater performance.•Benefit of HSI-based segmentation over RGB & processed HSI higher with smaller input.•Organ segmentation improves with increased spatial context regardless of modality.
Semantic image segmentation is an important prerequisite for context-awareness and autonomous robotics in surgery. The state of the art has focused on conventional RGB video data acquired during minimally invasive surgery, but full-scene semantic segmentation based on spectral imaging data and obtained during open surgery has received almost no attention to date. To address this gap in the literature, we are investigating the following research questions based on hyperspectral imaging (HSI) data of pigs acquired in an open surgery setting: (1) What is an adequate representation of HSI data for neural network-based fully automated organ segmentation, especially with respect to the spatial granularity of the data (pixels vs. superpixels vs. patches vs. full images)? (2) Is there a benefit of using HSI data compared to other modalities, namely RGB data and processed HSI data (e.g. tissue parameters like oxygenation), when performing semantic organ segmentation? According to a comprehensive validation study based on 506 HSI images from 20 pigs, annotated with a total of 19 classes, deep learning-based segmentation performance increases — consistently across modalities — with the spatial context of the input data. Unprocessed HSI data offers an advantage over RGB data or processed data from the camera provider, with the advantage increasing with decreasing size of the input to the neural network. Maximum performance (HSI applied to whole images) yielded a mean DSC of 0.90 ((standard deviation (SD)) 0.04), which is in the range of the inter-rater variability (DSC of 0.89 ((standard deviation (SD)) 0.07)). We conclude that HSI could become a powerful image modality for fully-automatic surgical scene understanding with many advantages over traditional imaging, including the ability to recover additional functional tissue information. Our code and pre-trained models are available at https://github.com/IMSY-DKFZ/htc. |
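For readers unfamiliar with the metric reported in the abstract: the Dice similarity coefficient (DSC) measures the overlap between a predicted and a reference segmentation mask. The sketch below illustrates the computation for a single binary mask pair with NumPy; it is an illustration only, not the paper's evaluation code (which lives in the linked htc repository), and the function name `dice_score` is our own.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 4x4 masks for one organ class (1 = organ pixel, 0 = background)
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(f"DSC = {dice_score(pred, target):.3f}")  # 2*3 / (4+3) ≈ 0.857
```

In the study, such per-class scores are averaged over organs and images to yield the reported mean DSC of 0.90 (SD 0.04).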
doi | 10.1016/j.media.2022.102488 |
publisher | Elsevier B.V. (Netherlands) |
pmid | 35667327 |
eissn | 1361-8423 |
rights | © 2022 The Authors. Published by Elsevier B.V. All rights reserved. |
identifier | ISSN: 1361-8415 |
ispartof | Medical image analysis, 2022-08, Vol.80, p.102488-102488, Article 102488 |
issn | 1361-8415 (print); 1361-8423 (electronic) |
language | eng |
source | Access via ScienceDirect (Elsevier) |
subjects | Deep learning; Hyperspectral imaging; Open surgery; Organ segmentation; Semantic scene segmentation; Surgical data science |
title | Robust deep learning-based semantic organ segmentation in hyperspectral images |