Evaluation of Radiograph Accuracy in Skull X-ray Images Using Deep Learning
Purpose: Accurate positioning is essential in radiography, and it is especially important to maintain image reproducibility in follow-up observations. The decision to retake a radiograph is entrusted to the individual radiological technologist. This evaluation is visual and qualitative, and acceptance criteria vary between individuals.
Saved in:
Published in: | Japanese Journal of Radiological Technology 2022/01/20, Vol.78(1), pp.23-32 |
---|---|
Main authors: | Mitsutake, Hideyoshi; Watanabe, Haruyuki; Sakaguchi, Aya; Uchiyama, Kiyoshi; Lee, Yongbum; Hayashi, Norio; Shimosegawa, Masayuki; Ogura, Toshihiro |
Format: | Article |
Language: | eng ; jpn |
Keywords: | |
Online access: | Full text |
container_end_page | 32 |
---|---|
container_issue | 1 |
container_start_page | 23 |
container_title | Japanese Journal of Radiological Technology |
container_volume | 78 |
creator | Mitsutake, Hideyoshi; Watanabe, Haruyuki; Sakaguchi, Aya; Uchiyama, Kiyoshi; Lee, Yongbum; Hayashi, Norio; Shimosegawa, Masayuki; Ogura, Toshihiro |
description | Purpose: Accurate positioning is essential in radiography, and it is especially important to maintain image reproducibility in follow-up observations. The decision to retake a radiograph is entrusted to the individual radiological technologist. This evaluation is visual and qualitative, and acceptance criteria vary between individuals. In this study, we propose an image-evaluation method for skull X-ray images using a deep convolutional neural network (DCNN). Method: Radiographs were obtained from 5 skull phantoms and classified by a simple network and by VGG16. The discrimination ability of the DCNNs was verified by recognizing the X-ray projection angle and the need to retake the radiograph. The DCNN architectures were used with different input image sizes and were evaluated by 5-fold cross-validation and leave-one-out cross-validation. Result: Using 5-fold cross-validation, the classification accuracy was 99.75% for the simple network and 80.00% for VGG16 at the small input image size; at the general image size, the simple network and VGG16 achieved 79.58% and 80.00%, respectively. Conclusion: The experimental results showed that the combination of a small input image size and a shallow DCNN architecture was suitable for the four-category classification of X-ray projection angles, with accuracy up to 99.75%. The proposed method has the potential to automatically recognize slight projection-angle errors and images that need retaking against the acceptance criteria. We consider that the proposed method can contribute feedback for retaking images and help reduce radiation dose arising from individual subjectivity. |
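The abstract describes evaluating two DCNN classifiers with 5-fold cross-validation. As a hedged illustration of that evaluation protocol only (this is not the authors' implementation; the stub classifier and the function names `k_fold_indices` and `cross_validated_accuracy` are hypothetical), a minimal sketch in Python:

```python
# Minimal sketch of the 5-fold cross-validation protocol described in the
# abstract. The classifier is deliberately a stand-in: the paper trains
# DCNNs (a simple network and VGG16), which are not reproduced here.
from statistics import mean


def k_fold_indices(n_samples, k=5):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_size = n_samples // k
    indices = list(range(n_samples))
    for i in range(k):
        test = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, test


def cross_validated_accuracy(samples, labels, train_fn, predict_fn, k=5):
    """Mean accuracy over k folds, for any train/predict pair.

    train_fn(train_samples, train_labels) -> model
    predict_fn(model, sample) -> predicted label
    """
    accuracies = []
    for train, test in k_fold_indices(len(samples), k):
        model = train_fn([samples[i] for i in train],
                         [labels[i] for i in train])
        correct = sum(predict_fn(model, samples[i]) == labels[i]
                      for i in test)
        accuracies.append(correct / len(test))
    return mean(accuracies)
```

Reporting the mean accuracy across the held-out folds, as above, is what the paper's 99.75% and 80.00% figures correspond to; swapping the stub for a real DCNN train/predict pair would recover the full experiment.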
doi_str_mv | 10.6009/jjrt.780104 |
format | Article |
publisher | Japan: Japanese Society of Radiological Technology |
pmid | 35046219 |
eissn | 1881-4883 |
rights | 2022 Japanese Society of Radiological Technology; Copyright Japan Science and Technology Agency 2022 |
fulltext | fulltext |
identifier | ISSN: 0369-4305 |
ispartof | Japanese Journal of Radiological Technology, 2022/01/20, Vol.78(1), pp.23-32 |
issn | 0369-4305 1881-4883 |
language | eng ; jpn |
recordid | cdi_proquest_journals_2624696485 |
source | MEDLINE; EZB-FREE-00999 freely available EZB journals |
subjects | Acceptance criteria Accuracy artificial intelligence (AI) Artificial neural networks Classification deep convolutional neural network (DCNN) Deep Learning Evaluation Forecasting Machine learning Neural networks Radiation Radiation dosage radiograph accuracy Radiographs Radiography Reproducibility of Results Skull Skull - diagnostic imaging Visual discrimination X-ray image X-Rays |
title | Evaluation of Radiograph Accuracy in Skull X-ray Images Using Deep Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T15%3A07%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Evaluation%20of%20Radiograph%20Accuracy%20in%20Skull%20X-ray%20Images%20Using%20Deep%20Learning&rft.jtitle=Japanese%20Journal%20of%20Radiological%20Technology&rft.au=Mitsutake,%20Hideyoshi&rft.date=2022&rft.volume=78&rft.issue=1&rft.spage=23&rft.epage=32&rft.pages=23-32&rft.artnum=780104&rft.issn=0369-4305&rft.eissn=1881-4883&rft_id=info:doi/10.6009/jjrt.780104&rft_dat=%3Cproquest_cross%3E2624696485%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2624696485&rft_id=info:pmid/35046219&rfr_iscdi=true |