Dual-stream encoded fusion saliency detection based on RGB and grayscale images
Existing deep-learning-based saliency algorithms do not extract image features sufficiently, and the features are fused only during decoding. As a result, the edges of the saliency detection results are not clear and the internal structure is not displayed uniformly. To solve these problems, this paper proposes a dual-stream encoding fusion saliency detection method based on RGB and grayscale images.
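The gray stream referred to in the abstract has to be derived from the RGB input; the record does not say which conversion the authors use, so the sketch below assumes the common ITU-R BT.601 luma weights purely for illustration:

```python
def rgb_to_gray(pixels):
    """Convert RGB pixels to a single grayscale (luma) channel.

    Assumption: the paper does not specify its conversion, so the
    standard ITU-R BT.601 weights are used here for illustration.
    `pixels` is a list of (r, g, b) tuples with values in 0..255.
    """
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
```

Under these weights a pure white pixel maps to 255.0 and a pure red one to about 76.2; a real implementation would operate on whole image tensors rather than per-pixel lists.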
Saved in:
Published in: | Multimedia tools and applications 2023-12, Vol.82 (30), p.47327-47346 |
---|---|
Main authors: | Xu, Tao ; Zhao, Weishuo ; Chai, Haojie ; Cai, Lei |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | 47346 |
---|---|
container_issue | 30 |
container_start_page | 47327 |
container_title | Multimedia tools and applications |
container_volume | 82 |
creator | Xu, Tao ; Zhao, Weishuo ; Chai, Haojie ; Cai, Lei |
description | Existing deep-learning-based saliency algorithms do not extract image features sufficiently, and the features are fused only during decoding. As a result, the edges of the saliency detection results are not clear and the internal structure is not displayed uniformly. To solve these problems, this paper proposes a dual-stream encoding fusion saliency detection method based on RGB and grayscale images. First, an interactive dual-stream encoder is constructed to extract the feature information of the gray stream and the RGB stream. Second, a multi-level fusion strategy is used to obtain more effective multi-scale features. These features are extended and optimized in the decoding stage by linear transformation with hybrid attention. Finally, a hybrid weighted loss function is proposed so that the model's predictions maintain high accuracy at both the pixel level and the region level. Experimental results on 6 public datasets show that the proposed method produces clearer edges and more uniform interiors of salient targets, and that the model is more lightweight. |
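The "hybrid weighted loss" is only named in the abstract, not defined. A plausible reading is a weighted sum of a pixel-level binary cross-entropy term and a region-level soft-IoU term; the sketch below is a minimal pure-Python illustration under that assumption, and the weights `w_bce` and `w_iou` as well as the exact terms are guesses, not the authors' formulation:

```python
import math

def hybrid_weighted_loss(pred, target, w_bce=1.0, w_iou=1.0, eps=1e-7):
    """Weighted sum of a pixel-level BCE term and a region-level
    soft-IoU term (an assumed reading of the paper's hybrid loss).

    `pred` holds predicted probabilities in (0, 1); `target` holds
    ground-truth labels in {0, 1}; both are flat lists of pixels.
    """
    n = len(pred)
    # Pixel level: mean binary cross-entropy over all pixels.
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / n
    # Region level: 1 minus the soft intersection-over-union.
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    iou = 1.0 - (inter + eps) / (union + eps)
    return w_bce * bce + w_iou * iou
```

A near-perfect prediction drives both terms toward zero, while a confidently wrong one is penalized at both the pixel and the region level, which matches the abstract's claim of accuracy at both granularities.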
doi_str_mv | 10.1007/s11042-023-15217-z |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1380-7501 |
ispartof | Multimedia tools and applications, 2023-12, Vol.82 (30), p.47327-47346 |
issn | 1380-7501 1573-7721 |
language | eng |
recordid | cdi_proquest_journals_2895065852 |
source | SpringerLink Journals - AutoHoldings |
subjects | Accuracy ; Algorithms ; Computer Communication Networks ; Computer Science ; Data Structures and Information Theory ; Gray scale ; Image retrieval ; Linear transformations ; Methods ; Multimedia ; Multimedia Information Systems ; Neural networks ; Salience ; Semantics ; Special Purpose and Application-Based Systems |
title | Dual-stream encoded fusion saliency detection based on RGB and grayscale images |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-21T05%3A42%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Dual-stream%20encoded%20fusion%20saliency%20detection%20based%20on%20RGB%20and%20grayscale%20images&rft.jtitle=Multimedia%20tools%20and%20applications&rft.au=Xu,%20Tao&rft.date=2023-12-01&rft.volume=82&rft.issue=30&rft.spage=47327&rft.epage=47346&rft.pages=47327-47346&rft.issn=1380-7501&rft.eissn=1573-7721&rft_id=info:doi/10.1007/s11042-023-15217-z&rft_dat=%3Cproquest_cross%3E2895065852%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2895065852&rft_id=info:pmid/&rfr_iscdi=true |