Development of a deep residual learning algorithm to screen for glaucoma from fundus photography
The purpose of the study was to develop a deep residual learning algorithm to screen for glaucoma from fundus photography and measure its diagnostic performance compared to Residents in Ophthalmology. A training dataset consisted of 1,364 color fundus photographs with glaucomatous indications and 1,768 color fundus photographs without glaucomatous features.
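The abstract describes training a deep residual network (ResNet) on labelled colour fundus photographs and then scoring unseen eyes for glaucoma. The paper itself is not reproduced in this record and ships no code, so the following is only a minimal sketch of how such a binary fundus classifier could be set up with PyTorch/torchvision; the folder layout (`fundus/train` with `glaucoma/` and `normal/` sub-directories), the ResNet-34 depth, the 224x224 input size, and all hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: fine-tune a ResNet for binary glaucoma vs. normal classification.
# Assumptions (not from the paper): images live in fundus/train/glaucoma and
# fundus/train/normal, inputs are resized to 224x224, and a standard
# cross-entropy objective is used.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder maps sub-directory names (e.g. "glaucoma", "normal") to class indices.
train_set = datasets.ImageFolder("fundus/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Start from an ImageNet-pretrained residual network and replace the final
# fully connected layer with a 2-way classifier (glaucoma / normal).
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):                      # epoch count is illustrative
    model.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1}: loss = {running_loss / len(train_set):.4f}")
```

The softmax probability the trained model assigns to the glaucoma class is what would feed a receiver operating characteristic analysis; a second sketch after the record fields at the end of this page shows one way the AROC and its 95% confidence interval could be computed.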
Saved in:
Published in: | Scientific reports 2018-10, Vol.8 (1), p.14665-9, Article 14665 |
---|---|
Main authors: | Shibata, Naoto; Tanito, Masaki; Mitsuhashi, Keita; Fujino, Yuri; Matsuura, Masato; Murata, Hiroshi; Asaoka, Ryo |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | 9 |
---|---|
container_issue | 1 |
container_start_page | 14665 |
container_title | Scientific reports |
container_volume | 8 |
creator | Shibata, Naoto; Tanito, Masaki; Mitsuhashi, Keita; Fujino, Yuri; Matsuura, Masato; Murata, Hiroshi; Asaoka, Ryo |
description | The Purpose of the study was to develop a deep residual learning algorithm to screen for glaucoma from fundus photography and measure its diagnostic performance compared to Residents in Ophthalmology. A training dataset consisted of 1,364 color fundus photographs with glaucomatous indications and 1,768 color fundus photographs without glaucomatous features. A testing dataset consisted of 60 eyes of 60 glaucoma patients and 50 eyes of 50 normal subjects. Using the training dataset, a deep learning algorithm known as Deep Residual Learning for Image Recognition (ResNet) was developed to discriminate glaucoma, and its diagnostic accuracy was validated in the testing dataset, using the area under the receiver operating characteristic curve (AROC). The Deep Residual Learning for Image Recognition was constructed using the training dataset and validated using the testing dataset. The presence of glaucoma in the testing dataset was also confirmed by three Residents in Ophthalmology. The deep learning algorithm achieved significantly higher diagnostic performance compared to Residents in Ophthalmology; with ResNet, the AROC from all testing data was 96.5 (95% confidence interval [CI]: 93.5 to 99.6)% while the AROCs obtained by the three Residents were between 72.6% and 91.2%. |
doi_str_mv | 10.1038/s41598-018-33013-w |
format | Article |
eissn | 2045-2322 |
orcid | https://orcid.org/0000-0001-6082-0738 |
pmid | 30279554 |
publicationdate | 2018-10-02 |
publisher | Nature Publishing Group UK, London |
fulltext | fulltext |
identifier | ISSN: 2045-2322 |
ispartof | Scientific reports, 2018-10, Vol.8 (1), p.14665-9, Article 14665 |
issn | 2045-2322 2045-2322 |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_6168579 |
source | DOAJ Directory of Open Access Journals; Springer Nature OA Free Journals; Nature Free; EZB-FREE-00999 freely available EZB journals; PubMed Central; Free Full-Text Journals in Chemistry |
subjects | 692/308 692/308/575 Algorithms Color Datasets Glaucoma Humanities and Social Sciences multidisciplinary Photography Science Science (multidisciplinary) Training Weights & measures |
title | Development of a deep residual learning algorithm to screen for glaucoma from fundus photography |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T16%3A19%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Development%20of%20a%20deep%20residual%20learning%20algorithm%20to%20screen%20for%20glaucoma%20from%20fundus%20photography&rft.jtitle=Scientific%20reports&rft.au=Shibata,%20Naoto&rft.date=2018-10-02&rft.volume=8&rft.issue=1&rft.spage=14665&rft.epage=9&rft.pages=14665-9&rft.artnum=14665&rft.issn=2045-2322&rft.eissn=2045-2322&rft_id=info:doi/10.1038/s41598-018-33013-w&rft_dat=%3Cproquest_pubme%3E2116117375%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2115730059&rft_id=info:pmid/30279554&rfr_iscdi=true |
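The record's description reports diagnostic performance as the area under the receiver operating characteristic curve (AROC) with a 95% confidence interval. The paper's exact statistical procedure is not reproduced here; the sketch below shows one common way to estimate an AUC and a percentile-bootstrap confidence interval from per-eye glaucoma scores using scikit-learn and NumPy. The function name, the bootstrap approach, and the toy scores are illustrative assumptions, not the study's data or method.

```python
# Illustrative sketch: AROC with a percentile-bootstrap 95% CI.
# y_true:  1 = glaucoma, 0 = normal (one entry per test eye)
# y_score: model probability for the glaucoma class
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Return (AUC, lower, upper) using a percentile bootstrap over test eyes."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    auc = roc_auc_score(y_true, y_score)
    boot = []
    n = len(y_true)
    while len(boot) < n_boot:
        idx = rng.integers(0, n, n)           # resample eyes with replacement
        if len(np.unique(y_true[idx])) < 2:   # an AUC needs both classes present
            continue
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc, lo, hi

# Toy call pattern only (fabricated scores, mirroring the 60 glaucoma / 50 normal split):
y_true = np.array([1] * 60 + [0] * 50)
y_score = np.concatenate([np.clip(rng.normal(0.8, 0.2, 60), 0, 1),
                          np.clip(rng.normal(0.3, 0.2, 50), 0, 1)])
auc, lo, hi = auc_with_ci(y_true, y_score)
print(f"AROC = {auc:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

The same call pattern could be applied to the residents' gradings, treated as scores, to obtain comparable AROCs, although the record does not detail how the paper itself performed that comparison.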