Automatic 3D landmarking model using patch‐based deep neural networks for CT image of oral and maxillofacial surgery
Published in: | The international journal of medical robotics + computer assisted surgery 2020-06, Vol.16 (3), p.e2093-n/a |
---|---|
Main Authors: | Ma, Qingchuan; Kobayashi, Etsuko; Fan, Bowen; Nakagawa, Keiichi; Sakuma, Ichiro; Masamune, Ken; Suenaga, Hideyuki |
Format: | Article |
Language: | eng |
Subjects: | 3D cephalometry; Algorithms; Artificial neural networks; automatic landmarking; Computed tomography; convolutional neural network; machine learning; Maxillofacial surgery; Medical imaging; Neural networks; oral and maxillofacial surgery; Principal components analysis; Surgery; Three dimensional models; Workload |
Online Access: | Full text |
container_end_page | n/a |
---|---|
container_issue | 3 |
container_start_page | e2093 |
container_title | The international journal of medical robotics + computer assisted surgery |
container_volume | 16 |
creator | Ma, Qingchuan; Kobayashi, Etsuko; Fan, Bowen; Nakagawa, Keiichi; Sakuma, Ichiro; Masamune, Ken; Suenaga, Hideyuki |
description | Background
Manual landmarking is time-consuming and requires a high level of professional expertise. Although some algorithm-based landmarking methods have been proposed, they lack flexibility and may be sensitive to data diversity.
Methods
CT images from 66 patients who underwent oral and maxillofacial surgery (OMS) were landmarked manually in MIMICS. The CT slices were then exported as images to recreate the 3D volume. The landmark coordinate data were further processed in Matlab using principal component analysis (PCA). A patch-based deep neural network model with a three-layer convolutional neural network (CNN) was trained to obtain landmarks from the CT images (an illustrative sketch of this pipeline follows the abstract).
Results
The evaluation experiment showed that this CNN model could complete landmarking automatically in an average processing time of 37.871 seconds with an average accuracy of 5.785 mm.
Conclusion
This study shows promising potential to relieve the surgeon's workload and reduce the dependence on human experience for OMS landmarking. |
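The Methods paragraph above describes a two-stage pipeline: principal component analysis of the manually placed landmark coordinates (performed in Matlab in the paper), followed by a patch-based three-layer CNN that predicts landmark positions from local CT patches. The sketch below is a minimal illustration of such a pipeline, not the authors' implementation; the use of Python/NumPy/PyTorch, the 32-voxel cubic patch size, the 15-landmark count, the regression-style output head, and the layer widths are all assumptions made for the example.

```python
# Hypothetical illustration only -- not the authors' code. Patch size, landmark
# count, layer widths and the regression head are assumptions for the example.
import numpy as np
import torch
import torch.nn as nn


def landmark_pca(landmarks):
    """PCA of landmark configurations.

    landmarks: (n_subjects, n_landmarks * 3) array of flattened x, y, z
    coordinates (the paper performed this step in Matlab).
    """
    mean = landmarks.mean(axis=0)
    centered = landmarks - mean
    # Rows of `components` are the principal axes of landmark-shape variation.
    _, _, components = np.linalg.svd(centered, full_matrices=False)
    return mean, components


class PatchLandmarkCNN(nn.Module):
    """Three-layer patch-based CNN; regresses a landmark offset from a CT patch."""

    def __init__(self, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        flat = 64 * (patch_size // 8) ** 3
        self.head = nn.Linear(flat, 3)  # (x, y, z) offset within the patch

    def forward(self, patch):
        return self.head(self.features(patch).flatten(1))


if __name__ == "__main__":
    coords = np.random.rand(66, 15 * 3)      # 66 subjects (from the paper), 15 landmarks (assumed)
    mean, components = landmark_pca(coords)
    print(components.shape)                  # (45, 45) principal axes

    model = PatchLandmarkCNN(patch_size=32)
    patches = torch.randn(4, 1, 32, 32, 32)  # a batch of 4 cubic CT patches
    print(model(patches).shape)              # torch.Size([4, 3])
```

In a patch-based setup of this kind, the trained model is typically applied to candidate patches across the volume and the predicted offsets are aggregated into the final landmark coordinates.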
doi_str_mv | 10.1002/rcs.2093 |
format | Article |
pmid | 32065718 |
publisher | Chichester, UK: John Wiley & Sons, Inc |
fulltext | fulltext |
identifier | ISSN: 1478-5951 |
ispartof | The international journal of medical robotics + computer assisted surgery, 2020-06, Vol.16 (3), p.e2093-n/a |
issn | 1478-5951; 1478-596X |
language | eng |
recordid | cdi_proquest_miscellaneous_2356595823 |
source | Wiley Online Library Journals Frontfile Complete |
subjects | 3D cephalometry; Algorithms; Artificial neural networks; automatic landmarking; Computed tomography; convolutional neural network; machine learning; Maxillofacial surgery; Medical imaging; Neural networks; oral and maxillofacial surgery; Principal components analysis; Surgery; Three dimensional models; Workload |
title | Automatic 3D landmarking model using patch‐based deep neural networks for CT image of oral and maxillofacial surgery |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T04%3A41%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Automatic%203D%20landmarking%20model%20using%20patch%E2%80%90based%20deep%20neural%20networks%20for%20CT%20image%20of%20oral%20and%20maxillofacial%20surgery&rft.jtitle=The%20international%20journal%20of%20medical%20robotics%20+%20computer%20assisted%20surgery&rft.au=Ma,%20Qingchuan&rft.date=2020-06&rft.volume=16&rft.issue=3&rft.spage=e2093&rft.epage=n/a&rft.pages=e2093-n/a&rft.issn=1478-5951&rft.eissn=1478-596X&rft_id=info:doi/10.1002/rcs.2093&rft_dat=%3Cproquest_cross%3E2399079823%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2399079823&rft_id=info:pmid/32065718&rfr_iscdi=true |