Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion
For the image fusion method using sparse representation, the adaptive dictionary and fusion rule have a great influence on the multi-modality image fusion, and the maximum L1-norm fusion rule may cause gray inconsistency in the fusion result. In order to solve this problem, we proposed an improved...
Saved in:
Published in: | Machine vision and applications 2022-09, Vol.33 (5), Article 69 |
Main authors: | Wang, Chang; Wu, Yang; Yu, Yi; Zhao, Jun Qiang |
Format: | Article |
Language: | English |
Publisher: | Springer Berlin Heidelberg |
Subjects: | |
Online access: | Full text |
container_issue | 5 |
container_title | Machine vision and applications |
container_volume | 33 |
creator | Wang, Chang; Wu, Yang; Yu, Yi; Zhao, Jun Qiang |
description | For image fusion methods based on sparse representation, the adaptive dictionary and the fusion rule have a great influence on multi-modality image fusion, and the maximum L1-norm fusion rule may cause gray inconsistency in the fusion result. To solve this problem, we proposed an improved multi-modality image fusion method that combines a joint patch clustering-based adaptive dictionary with sparse representation. First, we used a Gaussian filter to separate the high- and low-frequency information. Second, we adopted a local energy-weighted strategy to complete the low-frequency fusion. Third, we used the joint patch clustering algorithm to construct an over-complete adaptive learning dictionary, designed a hybrid fusion rule based on the similarity of the multi-norms of the sparse representation coefficients, and completed the high-frequency fusion. Last, we obtained the fusion result by transforming from the frequency domain back to the spatial domain. We evaluated the fusion results quantitatively with fusion metrics and demonstrated the superiority of the proposed method by comparison with state-of-the-art image fusion methods: the proposed method achieved the highest fusion metrics in average gradient, general image quality, and edge preservation, as well as the best performance in subjective vision. We demonstrated its strong robustness by analyzing the influence of the parameters on the fusion result and the time consumption, and we extended the method successfully to infrared-visible image fusion and multi-focus image fusion. In summary, the method is robust and widely applicable. |
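The pipeline described in the abstract can be sketched in code. The following is a minimal illustration of two of its steps — Gaussian-filter frequency separation with local energy-weighted low-frequency fusion, plus a similarity-gated coefficient fusion rule in the spirit of the paper's hybrid rule. This is not the authors' implementation; all function names, the window size `win`, and the similarity threshold `tau` are our own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def split_frequencies(img, sigma=2.0):
    """Separate an image into low- and high-frequency parts with a Gaussian filter."""
    low = gaussian_filter(img.astype(float), sigma=sigma)
    high = img - low  # residual carries edges and texture
    return low, high

def fuse_low_frequency(low_a, low_b, win=7):
    """Local energy-weighted fusion: each pixel is weighted by the local
    energy (windowed mean of squares) of its source image."""
    e_a = uniform_filter(low_a ** 2, size=win)
    e_b = uniform_filter(low_b ** 2, size=win)
    w_a = e_a / (e_a + e_b + 1e-12)
    return w_a * low_a + (1.0 - w_a) * low_b

def fuse_sparse_coeffs(c_a, c_b, tau=0.8):
    """Hybrid-rule sketch for one patch's sparse coefficient vectors:
    if the two vectors are similar (cosine similarity above tau), blend
    them weighted by L1 energy; otherwise fall back to choose-max-L1."""
    n_a, n_b = np.abs(c_a).sum(), np.abs(c_b).sum()
    sim = c_a @ c_b / (np.linalg.norm(c_a) * np.linalg.norm(c_b) + 1e-12)
    if sim > tau:
        w = n_a / (n_a + n_b + 1e-12)
        return w * c_a + (1.0 - w) * c_b
    return c_a if n_a >= n_b else c_b
```

Because the low-frequency fusion is a pointwise convex combination, the fused value always lies between the two source values, which is how the local-energy weighting avoids the gray inconsistency that a hard maximum-L1 choice can introduce.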
doi_str_mv | 10.1007/s00138-022-01322-w |
format | Article |
identifier | ISSN: 0932-8092 |
ispartof | Machine vision and applications, 2022-09, Vol.33 (5), Article 69 |
issn | 0932-8092 1432-1769 |
language | eng |
recordid | cdi_proquest_journals_2692476889 |
source | SpringerNature Journals |
subjects | Algorithms; Clustering; Communications Engineering; Computer Science; Computer vision; Dictionaries; Image processing; Image Processing and Computer Vision; Image quality; Infrared imagery; Machine learning; Networks; Original Paper; Pattern Recognition; Representations; Robustness; Vision systems |
title | Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-16T17%3A44%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Joint%20patch%20clustering-based%20adaptive%20dictionary%20and%20sparse%20representation%20for%20multi-modality%20image%20fusion&rft.jtitle=Machine%20vision%20and%20applications&rft.au=Wang,%20Chang&rft.date=2022-09-01&rft.volume=33&rft.issue=5&rft.artnum=69&rft.issn=0932-8092&rft.eissn=1432-1769&rft_id=info:doi/10.1007/s00138-022-01322-w&rft_dat=%3Cproquest_cross%3E2692476889%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2692476889&rft_id=info:pmid/&rfr_iscdi=true |