Learning Robust Representation for Joint Grading of Ophthalmic Diseases via Adaptive Curriculum and Feature Disentanglement
Diabetic retinopathy (DR) and diabetic macular edema (DME) are leading causes of permanent blindness worldwide. Designing an automatic grading system with good generalization ability for DR and DME is vital in clinical practice. However, prior works either grade DR or DME independently, without considering internal correlations between them, or grade them jointly by shared feature representation, yet ignoring potential generalization issues caused by difficult samples and data bias. Aiming to address these problems, we propose a framework for joint grading with the dynamic difficulty-aware weighted loss (DAW) and the dual-stream disentangled learning architecture (DETACH). Inspired by curriculum learning, DAW learns from simple samples to difficult samples dynamically via measuring difficulty adaptively. DETACH separates features of grading tasks to avoid potential emphasis on the bias. With the addition of DAW and DETACH, the model learns robust disentangled feature representations to explore internal correlations between DR and DME and achieve better grading performance. Experiments on three benchmarks show the effectiveness and robustness of our framework under both the intra-dataset and cross-dataset tests.
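The abstract describes DAW and DETACH only at a high level. As a rough illustration of the two ideas (not the authors' released code), the sketch below shows one way a difficulty-aware per-sample weighting and two detached encoder streams could be combined for joint DR/DME grading; the class counts, feature dimension, and the confidence-based weighting formula are assumptions made purely for exposition.

```python
# Illustrative sketch only: the loss formula, network shapes, and class counts
# below are assumptions for exposition, not the implementation from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualStreamGrader(nn.Module):
    """Two separate encoders keep DR and DME features disentangled; each head
    sees the other stream's features only through a detached (gradient-free) copy."""

    def __init__(self, make_encoder, feat_dim=512, dr_classes=5, dme_classes=3):
        super().__init__()
        self.enc_dr = make_encoder()   # any backbone mapping an image to feat_dim features
        self.enc_dme = make_encoder()
        self.head_dr = nn.Linear(feat_dim * 2, dr_classes)
        self.head_dme = nn.Linear(feat_dim * 2, dme_classes)

    def forward(self, x):
        f_dr, f_dme = self.enc_dr(x), self.enc_dme(x)
        # .detach() blocks gradients so one task cannot reshape the other task's encoder
        logits_dr = self.head_dr(torch.cat([f_dr, f_dme.detach()], dim=1))
        logits_dme = self.head_dme(torch.cat([f_dme, f_dr.detach()], dim=1))
        return logits_dr, logits_dme


def difficulty_aware_loss(logits, targets, gamma=2.0):
    """Curriculum-style weighting: each sample's loss is scaled by the model's
    current confidence in the true class, so easy samples dominate early and
    hard samples gain influence only as the model improves."""
    per_sample_ce = F.cross_entropy(logits, targets, reduction="none")
    p_true = torch.softmax(logits, dim=1).gather(1, targets[:, None]).squeeze(1)
    weights = p_true.detach() ** gamma   # easy (high-confidence) samples -> weight near 1
    return (weights * per_sample_ce).mean()
```

In a training loop one would sum `difficulty_aware_loss` over the DR and DME heads; the confidence-power weighting and `gamma` here are stand-ins for the paper's adaptive difficulty measure, not its exact formulation.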
Published in: | arXiv.org, 2023-03
---|---
Main authors: | Che, Haoxuan; Jin, Haibo; Chen, Hao
Format: | Article
Language: | eng
Subjects: | Bias; Computer Science - Computer Vision and Pattern Recognition; Curricula; Datasets; Detaching; Diabetes; Diabetic retinopathy; Edema; Learning; Representations; Robustness
Online access: | Full text
container_title | arXiv.org
creator | Che, Haoxuan; Jin, Haibo; Chen, Hao
description | Diabetic retinopathy (DR) and diabetic macular edema (DME) are leading causes of permanent blindness worldwide. Designing an automatic grading system with good generalization ability for DR and DME is vital in clinical practice. However, prior works either grade DR or DME independently, without considering internal correlations between them, or grade them jointly by shared feature representation, yet ignoring potential generalization issues caused by difficult samples and data bias. Aiming to address these problems, we propose a framework for joint grading with the dynamic difficulty-aware weighted loss (DAW) and the dual-stream disentangled learning architecture (DETACH). Inspired by curriculum learning, DAW learns from simple samples to difficult samples dynamically via measuring difficulty adaptively. DETACH separates features of grading tasks to avoid potential emphasis on the bias. With the addition of DAW and DETACH, the model learns robust disentangled feature representations to explore internal correlations between DR and DME and achieve better grading performance. Experiments on three benchmarks show the effectiveness and robustness of our framework under both the intra-dataset and cross-dataset tests. |
doi_str_mv | 10.48550/arxiv.2207.04183 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2207_04183 |
source | arXiv.org; Free E-Journals
subjects | Bias; Computer Science - Computer Vision and Pattern Recognition; Curricula; Datasets; Detaching; Diabetes; Diabetic retinopathy; Edema; Learning; Representations; Robustness
title | Learning Robust Representation for Joint Grading of Ophthalmic Diseases via Adaptive Curriculum and Feature Disentanglement |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T03%3A04%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Robust%20Representation%20for%20Joint%20Grading%20of%20Ophthalmic%20Diseases%20via%20Adaptive%20Curriculum%20and%20Feature%20Disentanglement&rft.jtitle=arXiv.org&rft.au=Che,%20Haoxuan&rft.date=2023-03-26&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2207.04183&rft_dat=%3Cproquest_arxiv%3E2688298824%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2688298824&rft_id=info:pmid/&rfr_iscdi=true |