Learnable color space conversion and fusion for stain normalization in pathology images
Variations in hue and contrast are common in H&E-stained pathology images due to differences in slide preparation across various institutions. Such stain variations, while not affecting pathologists much in diagnosing the biopsy, pose significant challenges for computer-assisted diagnostic systems...
Saved in:
Published in: | Medical image analysis 2025-04, Vol.101, p.103424, Article 103424 |
---|---|
Main authors: | Ke, Jing ; Zhou, Yijin ; Shen, Yiqing ; Guo, Yi ; Liu, Ning ; Han, Xiaodan ; Shen, Dinggang |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | 103424 |
container_title | Medical image analysis |
container_volume | 101 |
creator | Ke, Jing ; Zhou, Yijin ; Shen, Yiqing ; Guo, Yi ; Liu, Ning ; Han, Xiaodan ; Shen, Dinggang |
description | Variations in hue and contrast are common in H&E-stained pathology images due to differences in slide preparation across various institutions. Such stain variations, while not affecting pathologists much in diagnosing the biopsy, pose significant challenges for computer-assisted diagnostic systems, leading to potential underdiagnosis or misdiagnosis, especially when stain differentiation introduces substantial heterogeneity across datasets from different sources. Traditional stain normalization methods, aimed at mitigating these issues, often require labor-intensive selection of appropriate templates, limiting their practicality and automation. Innovatively, we propose a Learnable Stain Normalization layer, i.e., LStainNorm, designed as an easily integrable component for pathology image analysis. It minimizes the need for manual template selection by autonomously learning the optimal stain characteristics. Moreover, the learned optimal stain template provides the interpretability to enhance the understanding of the normalization process. Additionally, we demonstrate that fusing pathology images normalized in multiple color spaces can improve performance. Therefore, we extend LStainNorm with a novel self-attention mechanism to facilitate the fusion of features across different attributes and color spaces. Experimentally, LStainNorm outperforms the state-of-the-art methods, including conventional ones and GANs, on two classification datasets and three nuclei segmentation datasets by an average increase of 4.78% in accuracy, 3.53% in Dice coefficient, and 6.59% in IoU. Additionally, by enabling an end-to-end training and inference process, LStainNorm eliminates the need for intermediate steps between normalization and analysis, resulting in more efficient use of hardware resources and significantly faster inference time, i.e., up to hundreds of times quicker than traditional methods.
The code is publicly available at https://github.com/yjzscode/Optimal-Normalisation-in-Color-Spaces.
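To picture the template-based normalization that LStainNorm automates, a classic Reinhard-style baseline matches each channel's mean and standard deviation to a template; in LStainNorm the template statistics are learned end-to-end rather than hand-picked. The function below is an illustrative sketch of that baseline, not the paper's implementation; the function name, shapes, and `eps` parameter are assumptions:

```python
import numpy as np

def reinhard_normalize(image, template_mean, template_std, eps=1e-6):
    """Match the per-channel mean/std of `image` to a stain template.

    image: (H, W, C) array; template_mean/template_std: (C,) arrays.
    In LStainNorm the template statistics would be learnable parameters
    optimized jointly with the downstream network; here they are fixed.
    """
    img = image.astype(np.float64)
    mean = img.mean(axis=(0, 1))          # per-channel source mean
    std = img.std(axis=(0, 1))            # per-channel source std
    # Standardize, then re-scale and re-center to the template statistics.
    normalized = (img - mean) / (std + eps) * template_std + template_mean
    return np.clip(normalized, 0.0, 255.0)
```

Manually selecting `template_mean`/`template_std` from a reference slide is exactly the labor-intensive step the abstract criticizes; making them trainable removes it.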
•Introduces an automatic color-space variable optimization, eliminating manual adjustments and enhancing adaptability.
•Employs a self-attention mechanism to optimally combine features from multiple color spaces, improving image interpretation accuracy.
•Implements a dynamic template generation strategy from the source image set, ensuring flexibility and effectiveness across various analysis tasks.
•Achieves substantial computational efficiency and faster inference by directly linking normalization to diagnostic networks without intermediate steps. |
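The highlighted self-attention fusion can be pictured as one attention step over a small set of feature vectors, one per color space. The shapes, single head, and weight matrices below are assumptions for illustration, not the fusion module described in the article:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_color_spaces(features, w_q, w_k, w_v):
    """Single-head self-attention over per-color-space features.

    features: (S, D) array, one D-dim feature vector per color space
    (e.g. S=3 for RGB, HSV, LAB); w_q/w_k/w_v: (D, D) projections.
    Each space's output is a weighted mix of all spaces' values.
    """
    q, k, v = features @ w_q, features @ w_k, features @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])   # (S, S) pairwise affinities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v                        # (S, D) fused features
```

The attention weights are data-dependent, so the network can learn which color-space representation to emphasize per input instead of fixing the combination in advance.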
doi_str_mv | 10.1016/j.media.2024.103424 |
format | Article |
publisher | Elsevier B.V. (Netherlands) |
pmid | 39740473 |
rights | 2025 Elsevier B.V. ; Copyright © 2024. Published by Elsevier B.V. |
orcid | 0000-0001-7866-3339 ; 0000-0003-3162-3502 |
fulltext | fulltext |
identifier | ISSN: 1361-8415 |
ispartof | Medical image analysis, 2025-04, Vol.101, p.103424, Article 103424 |
issn | 1361-8415 ; 1361-8423 |
language | eng |
recordid | cdi_proquest_miscellaneous_3150522289 |
source | Elsevier ScienceDirect Journals |
subjects | Learnable color space conversion ; Pathology image analysis ; Stain normalization |
title | Learnable color space conversion and fusion for stain normalization in pathology images |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T13%3A02%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learnable%20color%20space%20conversion%20and%20fusion%20for%20stain%20normalization%20in%20pathology%20images&rft.jtitle=Medical%20image%20analysis&rft.au=Ke,%20Jing&rft.date=2025-04&rft.volume=101&rft.spage=103424&rft.pages=103424-&rft.artnum=103424&rft.issn=1361-8415&rft.eissn=1361-8423&rft_id=info:doi/10.1016/j.media.2024.103424&rft_dat=%3Cproquest_cross%3E3150522289%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3150522289&rft_id=info:pmid/39740473&rft_els_id=S1361841524003499&rfr_iscdi=true |