A deep learning-based automated diagnostic system for classifying mammographic lesions
Saved in:
Published in: | Medicine (Baltimore) 2020-07, Vol.99 (27), p.e20977-e20977 |
---|---|
Main authors: | Yamaguchi, Takeshi; Inoue, Kenichi; Tsunoda, Hiroko; Uematsu, Takayoshi; Shinohara, Norimitsu; Mukai, Hirofumi |
Format: | Article |
Language: | eng |
Keywords: | Breast Neoplasms - diagnostic imaging; Case-Control Studies; Deep Learning; Female; Humans; Japan; Mammography - methods; Radiographic Image Interpretation, Computer-Assisted - methods; Retrospective Studies; Study Protocol Clinical Trial |
Online access: | Full text |
Abstract:
Screening mammography has led to reduced breast cancer-specific mortality and is recommended worldwide. However, the resulting workload on doctors who read mammographic scans needs to be addressed. Although computer-aided detection (CAD) systems have been developed to support readers, findings are conflicting as to whether traditional CAD systems improve reading performance. Rapid progress in the field of artificial intelligence (AI) has led to newer CAD systems that use deep learning-based algorithms, which have the potential to reach human performance levels. Those systems, however, have been developed using mammography images mainly from women in Western countries. Because Asian women characteristically have higher-density breasts, it is uncertain whether such AI systems are applicable to Japanese women. In this study, we will construct a deep learning-based CAD system trained on mammography images from a large number of Japanese women, interpreted with high-quality reading.
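The protocol does not name a specific network architecture. As an illustration only, the sketch below shows what a deep learning-based lesion classifier of this general kind might look like: a small convolutional network in PyTorch that maps a grayscale mammogram patch to a malignancy probability. The architecture, layer sizes, and names are assumptions made for this example, not the study's CAD system.

```python
# Hypothetical sketch of a deep learning-based lesion classifier; the actual
# architecture of the study's CAD system is not specified in the protocol.
import torch
import torch.nn as nn

class MammoPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit; sigmoid gives malignancy probability

    def forward(self, x):  # x: (batch, 1, H, W) grayscale mammogram patches
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))

# Usage with dummy 256x256 patches, purely for illustration.
model = MammoPatchClassifier()
scores = model(torch.randn(4, 1, 256, 256))  # -> tensor of shape (4, 1), values in [0, 1]
```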
We will collect digital mammography images taken for screening or diagnostic purposes at multiple institutions in Japan. A total of 15,000 images will be collected, consisting of 5,000 images with breast cancer and 10,000 images with benign lesions. At least 1,000 images of normal breasts will also be collected for use as reference data. With these data, we will construct a deep learning-based AI system to detect breast cancer on mammograms. The primary endpoint will be the sensitivity and specificity of the AI system on the test image set.
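The primary endpoint, sensitivity and specificity on the test image set, is computed directly from the confusion matrix: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). The minimal sketch below shows the calculation; the label arrays are invented for illustration and are not study data.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
# Labels: 1 = breast cancer present, 0 = benign or normal. Example arrays are made up.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)  # -> (0.666..., 0.666...)
```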
When the ability of AI reading is shown to be on a par with that of human reading, images of normal breasts or benign lesions that do not need to be read by a human could be selected by the AI beforehand. Our AI might also work well for other Asian women whose breast density, size, and shape are similar to those of Japanese women.
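The pre-selection described above amounts to triaging on the AI's output: images whose malignancy score falls below a threshold tuned for very high sensitivity would be set aside, and only the remainder routed to human readers. The sketch below illustrates that idea; the threshold value and the score list are hypothetical and not part of the protocol.

```python
# Hypothetical triage: route only images whose AI malignancy score meets a
# conservative threshold to human readers; the rest are treated as not needing review.
THRESHOLD = 0.05  # would be chosen on validation data to keep sensitivity near 100%

def triage(scores):
    needs_human_read = [i for i, s in enumerate(scores) if s >= THRESHOLD]
    auto_cleared = [i for i, s in enumerate(scores) if s < THRESHOLD]
    return needs_human_read, auto_cleared

scores = [0.01, 0.42, 0.03, 0.88, 0.02]  # made-up per-image malignancy scores
to_read, cleared = triage(scores)        # -> ([1, 3], [0, 2, 4])
```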
Trial registration: UMIN, trial number UMIN000039009. Registered 26 December 2019, https://www.umin.ac.jp/ctr/.
DOI: | 10.1097/MD.0000000000020977 |
Published: | 2020-07-02 |
Publisher: | Wolters Kluwer Health, Inc. (United States) |
PMID: | 32629712 |
ISSN: | 0025-7974 |
EISSN: | 1536-5964 |