Saliency Boosting: a novel framework to refine salient object detection


Full Description

Bibliographic Details
Published in: The Artificial intelligence review, 2020-06, Vol.53 (5), p.3731-3772
Main Authors: Singh, Vivek Kumar; Kumar, Nitin; Madhavan, Suresh
Format: Article
Language: English
Online Access: Full text
Description: Salient object detection is a challenging research area, and various methods have been proposed in the literature. However, these methods usually focus on detecting salient objects in particular types of images only and fail when exposed to a variety of images. Here, we address this problem by proposing a novel framework called Saliency Boosting for refining saliency maps. In particular, the framework trains an Artificial Neural Network Regressor to refine initial saliency measures obtained from existing saliency methods. Extensive experiments on seven publicly available datasets, viz. MSRA10K-test, DUT-OMRON-test, ECSSD, PASCAL-S, SED2, THUR15K, and HKU-IS, have been performed to determine the effectiveness of the proposed framework. Performance is measured in terms of Precision, Recall, F-Measure, Precision–Recall curve, Overlapping Ratio, Area Under the Curve, and Receiver Operating Characteristic curve. The proposed framework is compared with 20 state-of-the-art methods, including the best-performing methods of the last decade. Further, the proposed framework performs better than each individual saliency detection method used within it, and it outperforms or is comparable with the 20 state-of-the-art methods in terms of the aforementioned measures on all datasets.
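The refinement idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: per-pixel saliency scores from several existing detectors are stacked as a feature vector, and a neural-network regressor (here scikit-learn's `MLPRegressor`, with made-up shapes and synthetic data) is trained to map them toward ground-truth saliency.

```python
# Hypothetical sketch of the Saliency Boosting idea: scores from several
# existing saliency methods become per-pixel features for an ANN regressor.
# All sizes and data below are synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_pixels = 5000   # pixels pooled from training images (illustrative)
n_methods = 4     # number of existing saliency methods being combined

# Initial saliency measures: one column per existing method, values in [0, 1].
X = rng.random((n_pixels, n_methods))
# Stand-in ground-truth saliency (binary masks in practice), clipped to [0, 1].
y = (X.mean(axis=1) + 0.1 * rng.standard_normal(n_pixels)).clip(0, 1)

# ANN regressor that refines the stacked saliency measures.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)

# Refined per-pixel saliency, which would be reshaped back into a map.
refined = model.predict(X).clip(0, 1)
print(refined.shape)
```

At inference time the same stacking would be applied to a new image's saliency maps, and the regressor's output reshaped to image dimensions to give the refined map.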
DOI: 10.1007/s10462-019-09777-6
ISSN: 0269-2821
EISSN: 1573-7462
Source: SpringerLink Journals - AutoHoldings
Subjects:
Algorithms
Artificial Intelligence
Artificial neural networks
Computer Science
Datasets
Image retrieval
Machine learning
Medical imaging equipment
Neural networks
Object recognition
Parameter estimation
Recall
Salience