Machine Friendly Machine Learning: Interpretation of Computed Tomography Without Image Reconstruction

Recent advancements in deep learning for automated image processing and classification have accelerated many new applications for medical image analysis. However, most deep learning algorithms have been developed using reconstructed, human-interpretable medical images. While image reconstruction from raw sensor data is required for the creation of medical images, the reconstruction process only uses a partial representation of all the data acquired. Here, we report the development of a system to directly process raw computed tomography (CT) data in sinogram-space, bypassing the intermediary step of image reconstruction. Two classification tasks were evaluated for their feasibility of sinogram-space machine learning: body region identification and intracranial hemorrhage (ICH) detection. Our proposed SinoNet, a convolutional neural network optimized for interpreting sinograms, performed favorably compared to conventional reconstructed image-space-based systems for both tasks, regardless of scanning geometries in terms of projections or detectors. Further, SinoNet performed significantly better when using sparsely sampled sinograms than conventional networks operating in image-space. As a result, sinogram-space algorithms could be used in field settings for triage (presence of ICH), especially where low radiation dose is desired. These findings also demonstrate another strength of deep learning where it can analyze and interpret sinograms that are virtually impossible for human experts.
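The abstract describes classifying raw CT projection data (sinograms) directly, including a sparse-view setting with fewer projection angles. As a rough illustration of what a sinogram is (a hypothetical toy sketch, not the authors' SinoNet code), the following builds a parallel-beam sinogram of a small phantom by rotate-and-sum, then sparsely samples it by keeping every fourth projection angle:

```python
import numpy as np
from scipy.ndimage import rotate

def sinogram(image, n_angles=180):
    """Toy parallel-beam Radon transform: rotate the image and sum columns.

    Each row of the result is one projection (one detector readout)
    at one gantry angle.
    """
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return np.stack([
        rotate(image, a, reshape=False, order=1).sum(axis=0)
        for a in angles
    ])

# Toy "phantom": a bright square in an otherwise empty field of view.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

full = sinogram(img, n_angles=180)   # shape (180, 64): angles x detectors
sparse = full[::4]                   # sparse-view: keep every 4th projection
```

In the paper's setting, arrays like `full` or `sparse` (not the reconstructed image) would be the network's input; the sketch above only shows the data representation, not the classifier.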

Detailed Description

Bibliographic Details
Published in: Scientific reports 2019-10, Vol.9 (1), p.15540-9, Article 15540
Main authors: Lee, Hyunkwang; Huang, Chao; Yune, Sehyo; Tajmir, Shahein H.; Kim, Myeongchan; Do, Synho
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 9
container_issue 1
container_start_page 15540
container_title Scientific reports
container_volume 9
creator Lee, Hyunkwang
Huang, Chao
Yune, Sehyo
Tajmir, Shahein H.
Kim, Myeongchan
Do, Synho
description Recent advancements in deep learning for automated image processing and classification have accelerated many new applications for medical image analysis. However, most deep learning algorithms have been developed using reconstructed, human-interpretable medical images. While image reconstruction from raw sensor data is required for the creation of medical images, the reconstruction process only uses a partial representation of all the data acquired. Here, we report the development of a system to directly process raw computed tomography (CT) data in sinogram-space, bypassing the intermediary step of image reconstruction. Two classification tasks were evaluated for their feasibility of sinogram-space machine learning: body region identification and intracranial hemorrhage (ICH) detection. Our proposed SinoNet, a convolutional neural network optimized for interpreting sinograms, performed favorably compared to conventional reconstructed image-space-based systems for both tasks, regardless of scanning geometries in terms of projections or detectors. Further, SinoNet performed significantly better when using sparsely sampled sinograms than conventional networks operating in image-space. As a result, sinogram-space algorithms could be used in field settings for triage (presence of ICH), especially where low radiation dose is desired. These findings also demonstrate another strength of deep learning where it can analyze and interpret sinograms that are virtually impossible for human experts.
doi_str_mv 10.1038/s41598-019-51779-5
format Article
publisher London: Nature Publishing Group UK
date 2019-10-29
pmid 31664075
eissn 2045-2322
doi 10.1038/s41598-019-51779-5
rights The Author(s) 2019; published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
orcidid 0000-0003-3940-9850; 0000-0001-6211-7050; 0000-0002-4806-6709; 0000-0002-9223-3586
fulltext fulltext
identifier ISSN: 2045-2322
ispartof Scientific reports, 2019-10, Vol.9 (1), p.15540-9, Article 15540
issn 2045-2322
2045-2322
language eng
recordid cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_6820559
source DOAJ Directory of Open Access Journals; Springer Nature OA Free Journals; Nature Free; EZB-FREE-00999 freely available EZB journals; PubMed Central; Alma/SFX Local Collection; Free Full-Text Journals in Chemistry
subjects 639/166/987
692/700/1421/2109
692/700/1421/65
Algorithms
Artificial intelligence
Classification
Computed tomography
Deep learning
Hemorrhage
Humanities and Social Sciences
Image processing
Learning algorithms
Machine learning
multidisciplinary
Neural networks
Science
Science (multidisciplinary)
title Machine Friendly Machine Learning: Interpretation of Computed Tomography Without Image Reconstruction
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T12%3A44%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Machine%20Friendly%20Machine%20Learning:%20Interpretation%20of%20Computed%20Tomography%20Without%20Image%20Reconstruction&rft.jtitle=Scientific%20reports&rft.au=Lee,%20Hyunkwang&rft.date=2019-10-29&rft.volume=9&rft.issue=1&rft.spage=15540&rft.epage=9&rft.pages=15540-9&rft.artnum=15540&rft.issn=2045-2322&rft.eissn=2045-2322&rft_id=info:doi/10.1038/s41598-019-51779-5&rft_dat=%3Cproquest_pubme%3E2310430301%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2310430301&rft_id=info:pmid/31664075&rfr_iscdi=true