A benchmark dataset and baseline model for co-salient object detection within RGB-D images
Within-image co-salient object detection (wCoSOD) identifies the common and salient objects within a single image, which benefits many applications such as reducing information redundancy and animation synthesis. In addition, introducing depth information, which conforms to human stereo perception, helps detect salient objects more accurately. In this paper, we therefore address a new task from the perspective of both a benchmark dataset and a baseline model: within-image co-salient object detection in RGB-D images. To bridge the gap between the new task and algorithm verification, we first collect a new dataset containing 240 RGB-D images with corresponding pixel-wise ground truth. We then propose an unsupervised method for within-image co-salient object detection in RGB-D images. Under the constraint of depth information, our model decomposes the task into two parts: determining salient object proposals, and combining a similarity constraint with a cluster-based constraint between proposals to locate the co-salient objects and generate the final result. Experimental results on the collected dataset demonstrate that our method achieves competitive performance both qualitatively and quantitatively.
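The abstract outlines a two-stage unsupervised pipeline: generate salient object proposals under a depth constraint, then combine a similarity constraint with a cluster-based constraint across proposals to pick out the co-salient objects. Below is a minimal illustrative sketch of that structure in Python, not the authors' implementation: the random box sampling, color-histogram features, histogram-intersection similarity, greedy grouping, and all thresholds are hypothetical stand-ins for the paper's actual proposal generation and constraints.

```python
import numpy as np

def saliency_proposals(rgb, depth, n_proposals=8, seed=0):
    """Stage 1 stand-in: sample candidate boxes and keep those whose mean
    depth is closer than the scene average (a crude depth constraint)."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    boxes = []
    for _ in range(n_proposals * 10):
        y0 = int(rng.integers(0, h // 2))
        x0 = int(rng.integers(0, w // 2))
        y1 = min(h, y0 + int(rng.integers(h // 4, h // 2)))
        x1 = min(w, x0 + int(rng.integers(w // 4, w // 2)))
        if depth[y0:y1, x0:x1].mean() < depth.mean():  # foreground-ish region
            boxes.append((y0, x0, y1, x1))
        if len(boxes) == n_proposals:
            break
    return boxes

def appearance_feature(rgb, box, bins=8):
    """Normalized color histogram of a proposal, used as its descriptor."""
    y0, x0, y1, x1 = box
    patch = rgb[y0:y1, x0:x1].reshape(-1, 3)
    hist, _ = np.histogramdd(patch, bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-8)

def co_salient_groups(feats, sim_thresh=0.5):
    """Stage 2 stand-in: greedy single-link grouping on histogram
    intersection; groups holding at least two proposals are treated as
    repeated (i.e., co-salient) objects within the image."""
    groups = []
    for i, f in enumerate(feats):
        for g in groups:
            if any(np.minimum(f, feats[j]).sum() > sim_thresh for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return [g for g in groups if len(g) >= 2]

# Toy usage on random data; real inputs would be an aligned RGB-D pair.
rgb = np.random.randint(0, 256, size=(240, 320, 3))
depth = np.random.rand(240, 320)
boxes = saliency_proposals(rgb, depth)
groups = co_salient_groups([appearance_feature(rgb, b) for b in boxes])
print("co-salient proposal groups:", groups)
```

The sketch only mirrors the decomposition stated in the abstract; the paper's actual proposal generation, features, and constraints are presumably more principled.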
Saved in:
Published in: | Multimedia tools and applications 2022-10, Vol.81 (25), p.35831-35842 |
---|---|
Main authors: | Yang, Ning; Zhang, Chen; Zhang, Yumo; Yang, Haowei; Du, Ling |
Format: | Article |
Language: | English |
Subjects: | 1190: Depth-Related Processing and Applications in Visual Systems; Algorithms; Animation; Benchmarks; Computer Communication Networks; Computer Science; Constraint modelling; Data Structures and Information Theory; Datasets; Multimedia Information Systems; Object recognition; Proposals; Redundancy; Salience; Special Purpose and Application-Based Systems |
Online access: | Full text |
container_end_page | 35842 |
---|---|
container_issue | 25 |
container_start_page | 35831 |
container_title | Multimedia tools and applications |
container_volume | 81 |
creator | Yang, Ning; Zhang, Chen; Zhang, Yumo; Yang, Haowei; Du, Ling |
description | Within-image co-salient object detection (wCoSOD) identifies the common and salient objects within a single image, which benefits many applications such as reducing information redundancy and animation synthesis. In addition, introducing depth information, which conforms to human stereo perception, helps detect salient objects more accurately. In this paper, we therefore address a new task from the perspective of both a benchmark dataset and a baseline model: within-image co-salient object detection in RGB-D images. To bridge the gap between the new task and algorithm verification, we first collect a new dataset containing 240 RGB-D images with corresponding pixel-wise ground truth. We then propose an unsupervised method for within-image co-salient object detection in RGB-D images. Under the constraint of depth information, our model decomposes the task into two parts: determining salient object proposals, and combining a similarity constraint with a cluster-based constraint between proposals to locate the co-salient objects and generate the final result. Experimental results on the collected dataset demonstrate that our method achieves competitive performance both qualitatively and quantitatively. |
doi_str_mv | 10.1007/s11042-021-11555-y |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1380-7501 |
ispartof | Multimedia tools and applications, 2022-10, Vol.81 (25), p.35831-35842 |
issn | 1380-7501; 1573-7721 |
language | eng |
recordid | cdi_proquest_journals_2717355593 |
source | SpringerLink Journals |
subjects | 1190: Depth-Related Processing and Applications in Visual Systems; Algorithms; Animation; Benchmarks; Computer Communication Networks; Computer Science; Constraint modelling; Data Structures and Information Theory; Datasets; Multimedia Information Systems; Object recognition; Proposals; Redundancy; Salience; Special Purpose and Application-Based Systems |
title | A benchmark dataset and baseline model for co-salient object detection within RGB-D images |