Adopting Quaternion Wavelet Transform to Fuse Multi-Modal Medical Images
Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In this paper, we propose a novel multi-modal medical image fusion method based on simplified pulse-coupled neural network and...
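The abstract above describes fusion performed in a quaternion wavelet domain, with coefficients combined by a simplified pulse-coupled neural network; the full algorithm is not reproduced in this record. As a rough illustration of wavelet-domain fusion only, the sketch below substitutes an ordinary real-valued discrete wavelet transform (PyWavelets) and a plain averaging/maximum-absolute rule for the paper's QWT and SPCNN steps. It assumes two co-registered, same-size grayscale source images; the function and parameter names are illustrative.

```python
# Illustrative wavelet-domain fusion sketch: NOT the paper's QWT + SPCNN method.
# Assumes two co-registered, same-size grayscale images as float NumPy arrays.
import numpy as np
import pywt  # PyWavelets


def fuse_wavelet(img_a: np.ndarray, img_b: np.ndarray,
                 wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Average the approximation band; keep the larger-magnitude
    coefficient in every detail band."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    fused = [(ca[0] + cb[0]) / 2.0]  # low-frequency (approximation) band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(x) >= np.abs(y), x, y)  # maximum-absolute rule
            for x, y in ((ha, hb), (va, vb), (da, db))
        ))
    # Reconstruction may be slightly larger than the input; crop back.
    return pywt.waverec2(fused, wavelet)[:img_a.shape[0], :img_a.shape[1]]
```

Averaging the approximation band balances overall intensity between the two modalities, while the maximum-absolute rule keeps whichever source has the stronger edge or texture response at each detail coefficient. The record's abstract indicates the paper instead relies on a simplified pulse-coupled neural network with the quaternion wavelet transform, which this sketch does not attempt to reproduce.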
Saved in:

| Published in: | Journal of medical and biological engineering, 2017, Vol. 37 (2), p. 230-239 |
|---|---|
| Main authors: | Geng, Peng; Sun, Xiuming; Liu, Jianhua |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
| Field | Value |
|---|---|
| container_end_page | 239 |
| container_issue | 2 |
| container_start_page | 230 |
| container_title | Journal of medical and biological engineering |
| container_volume | 37 |
| creator | Geng, Peng; Sun, Xiuming; Liu, Jianhua |
| description | Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In this paper, we propose a novel multi-modal medical image fusion method based on simplified pulse-coupled neural network and quaternion wavelet transform. The proposed fusion algorithm is capable of combining not only pairs of computed tomography (CT) and magnetic resonance (MR) images, but also pairs of CT and proton-density-weighted MR images, and multi-spectral MR images such as T1 and T2. Experiments on six pairs of multi-modal medical images are conducted to compare the proposed scheme with four existing methods. The performances of various methods are investigated using mutual information metrics and comprehensive fusion performance characterization (total fusion performance, fusion loss, and modified fusion artifacts criteria). The experimental results show that the proposed algorithm not only extracts more important visual information from source images, but also effectively avoids introducing artificial information into fused medical images. It significantly outperforms existing medical image fusion methods in terms of subjective performance and objective evaluation metrics. |
| doi_str_mv | 10.1007/s40846-016-0200-6 |
| format | Article |
| publisher | Springer Berlin Heidelberg (Berlin/Heidelberg) |
| pmid | 29755307 |
| eissn | 2199-4757 |
| rights | Taiwanese Society of Biomedical Engineering 2017 |
| oa | free_for_read |
| fulltext | fulltext |
| identifier | ISSN: 1609-0985 |
| ispartof | Journal of medical and biological engineering, 2017, Vol.37 (2), p.230-239 |
| issn | 1609-0985; 2199-4757 |
| language | eng |
| recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_5928192 |
| source | SpringerLink Journals |
| subjects | Biomedical Engineering and Bioengineering; Cell Biology; Computed tomography; Defense industry; Engineering; Imaging; Magnetic resonance imaging; Neural networks; Original; Original Article; Quaternions; Radiation therapy; Radiology; Surgery; Therapeutic applications; Wavelet transforms |
| title | Adopting Quaternion Wavelet Transform to Fuse Multi-Modal Medical Images |
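The description field above reports evaluation with mutual information metrics alongside total fusion performance, fusion loss, and modified fusion artifacts criteria; the exact formulations are not reproduced in this record. The sketch below computes the commonly used mutual-information fusion score MI(F, A) + MI(F, B) from 256-bin joint histograms, as an assumed baseline variant rather than the paper's definition.

```python
# Mutual-information fusion score: MI(fused, src_a) + MI(fused, src_b).
# A common baseline variant; the paper's exact metric definition is not
# given in this record, so treat this as an assumption.
import numpy as np


def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 256) -> float:
    """Mutual information (in bits) between two same-size grayscale images."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of x
    py = pxy.sum(axis=0, keepdims=True)           # marginal of y
    nz = pxy > 0                                  # keep nonzero cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))


def fusion_mi(fused: np.ndarray, src_a: np.ndarray, src_b: np.ndarray) -> float:
    """Higher is better: how much information from each source the fused image retains."""
    return mutual_information(fused, src_a) + mutual_information(fused, src_b)
```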