Mixed pyramid attention network for nuclear cataract classification based on anterior segment OCT images


Bibliographic Details
Published in: Health information science and systems, 2022-03, Vol. 10 (1), p. 3, Article 3
Authors: Zhang, Xiaoqing; Xiao, Zunjie; Li, Xiaoling; Wu, Xiao; Sun, Hanxi; Yuan, Jin; Higashita, Risa; Liu, Jiang
Format: Article
Language: English
Online access: Full text
Description: Nuclear cataract (NC) is a leading cause of blindness and vision impairment worldwide. NC patients can improve their vision through cataract surgery or slow the progression of lens opacity with early intervention. Anterior segment optical coherence tomography (AS-OCT) is an emerging ophthalmic imaging modality that clearly captures the whole lens structure. Recently, clinicians have increasingly studied the correlation between NC severity levels and clinical features of the nucleus region on AS-OCT images, and the results suggest the correlation is strong. However, automatic NC classification based on AS-OCT images has rarely been studied. This paper presents a novel mixed pyramid attention network (MPANet) to automatically classify NC severity levels on AS-OCT images. In the MPANet, we design a novel mixed pyramid attention (MPA) block, which first applies group convolution to enhance the representational diversity of the feature maps and then constructs a mixed pyramid pooling structure to extract local-global feature representations and different feature representation types simultaneously. We conduct extensive experiments on a clinical AS-OCT image dataset and a public OCT dataset to evaluate the effectiveness of our method. The results demonstrate that our method achieves competitive classification performance compared with state-of-the-art methods and previous works. Moreover, this paper also uses the class activation mapping (CAM) technique to improve the interpretability of our method's classification results.
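This record does not include the authors' code, and the abstract describes the MPA block only at a high level. The following is a minimal PyTorch sketch of one plausible reading of that description: a group convolution followed by mixed (average plus max) pyramid pooling whose pooled statistics drive channel attention. The class name, group count, pyramid grid sizes, and the channel-attention formulation are all assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: layer configuration is assumed from the abstract,
# not taken from the paper's (unpublished) code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedPyramidAttention(nn.Module):
    """Hypothetical MPA-style block: group convolution, then mixed
    (average + max) pyramid pooling that yields channel attention weights."""

    def __init__(self, channels: int, groups: int = 8, pyramid_sizes=(1, 2, 4)):
        super().__init__()
        # Group convolution to diversify representations across channel groups
        self.group_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                    padding=1, groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.pyramid_sizes = pyramid_sizes
        # Per-channel descriptor length: avg + max statistics at each grid size
        descriptor_len = 2 * sum(s * s for s in pyramid_sizes)
        self.fc = nn.Sequential(
            nn.Linear(descriptor_len, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        feats = F.relu(self.bn(self.group_conv(x)))
        # Mixed pyramid pooling: concatenate avg- and max-pooled maps at
        # several grid sizes to capture both local and global statistics
        descs = []
        for s in self.pyramid_sizes:
            descs.append(F.adaptive_avg_pool2d(feats, s).reshape(b, c, -1))
            descs.append(F.adaptive_max_pool2d(feats, s).reshape(b, c, -1))
        desc = torch.cat(descs, dim=-1)          # (b, c, descriptor_len)
        attn = torch.sigmoid(self.fc(desc))      # (b, c, 1) channel weights
        return x * attn.unsqueeze(-1)            # reweight input feature maps

# Usage example on a dummy AS-OCT feature map
if __name__ == "__main__":
    block = MixedPyramidAttention(channels=64)
    y = block(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Combining average and max pooling at several grid sizes is one way to mix local and global statistics, which matches the abstract's claim of extracting local-global representations and different feature representation types simultaneously.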
DOI: 10.1007/s13755-022-00170-2
Publisher: Springer International Publishing, Cham
PMID: 35401971
ISSN: 2047-2501
EISSN: 2047-2501
Source: SpringerLink Journals; PubMed Central
Subjects:
Bioinformatics
Blindness
Cataract
Cataracts
Classification
Computational Biology/Bioinformatics
Computer Science
Datasets
Eye diseases
Feature extraction
Feature maps
Health Informatics
Image classification
Information Systems and Communication Service
Medical imaging
Optical Coherence Tomography
Representations
Segments
Surgery
Vision
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T07%3A42%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Mixed%20pyramid%20attention%20network%20for%20nuclear%20cataract%20classification%20based%20on%20anterior%20segment%20OCT%20images&rft.jtitle=Health%20information%20science%20and%20systems&rft.au=Zhang,%20Xiaoqing&rft.date=2022-03-25&rft.volume=10&rft.issue=1&rft.spage=3&rft.pages=3-&rft.artnum=3&rft.issn=2047-2501&rft.eissn=2047-2501&rft_id=info:doi/10.1007/s13755-022-00170-2&rft_dat=%3Cgale_pubme%3EA698314807%3C/gale_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2643123948&rft_id=info:pmid/35401971&rft_galeid=A698314807&rfr_iscdi=true