A comparative study for multiple visual concepts detection in images and videos

Automatic indexing of images and videos is a highly relevant and important research area in multimedia information retrieval. The difficulty of this task is no longer something to prove. Most efforts of the research community have been focusing, in the past, on the detection of single concepts in images/videos, which is already a hard task.

Detailed description

Bibliographic details
Published in: Multimedia tools and applications, 2016-08, Vol. 75 (15), p. 8973-8997
Authors: Hamadi, Abdelkader; Mulhem, Philippe; Quénot, Georges
Format: Article
Language: English
Online access: Full text
Description:
Automatic indexing of images and videos is a highly relevant and important research area in multimedia information retrieval. The difficulty of this task is no longer something to prove. Most efforts of the research community have been focusing, in the past, on the detection of single concepts in images/videos, which is already a hard task. With the evolution of information retrieval systems, users’ needs become more abstract, and lead to a larger number of words composing the queries. It is important to think about indexing multimedia documents with more than just individual concepts, to help retrieval systems to answer such complex queries. Few studies addressed specifically the problem of detecting multiple concepts (multi-concept) in images and videos. Most of them concern the detection of concept pairs. These studies showed that such challenge is even greater than the one of single concept detection. In this work, we address the problem of multi-concept detection in images/videos by making a comparative and detailed study. Three types of approaches are considered: 1) building detectors for multi-concept, 2) fusing single concepts detectors and 3) exploiting detectors of a set of single concepts in a stacking scheme. We conducted our evaluations on PASCAL VOC’12 collection regarding the detection of pairs and triplets of concepts. We extended the evaluation process on TRECVid 2013 dataset for infrequent concept pairs’ detection. Our results show that the three types of approaches give globally comparable results for images, but they differ for specific kinds of pairs/triplets. In the case of videos, late fusion of detectors seems to be more effective and efficient when single concept detectors have good performances. Otherwise, directly building bi-concept detectors remains the best alternative, especially if a well-annotated dataset is available. The third approach did not bring additional gain or efficiency.
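The second approach family named in the abstract, late fusion of single-concept detectors, can be illustrated with a minimal sketch. The product and minimum rules below are common score-fusion choices for combining independent detector probabilities; they are assumptions for illustration, not necessarily the exact fusion operators evaluated in the paper, and the threshold value is arbitrary.

```python
# Minimal sketch of late fusion for multi-concept (pair/triplet) detection:
# each single-concept detector outputs a probability for one image/shot,
# and the fused score decides whether the whole concept combination is present.
from functools import reduce


def product_fusion(scores):
    """Combine per-concept probabilities by their product (AND-like rule)."""
    return reduce(lambda a, b: a * b, scores)


def min_fusion(scores):
    """Combine per-concept probabilities by their minimum (weakest link)."""
    return min(scores)


def detect_multi_concept(single_scores, fusion=product_fusion, threshold=0.25):
    """single_scores: probabilities from independent single-concept detectors.

    Returns the fused score and a binary decision for the multi-concept.
    The threshold is a hypothetical value chosen for illustration.
    """
    fused = fusion(single_scores)
    return fused, fused >= threshold
```

For example, fusing detector scores 0.9 and 0.6 for a concept pair yields a product score of 0.54, which passes the illustrative threshold; this matches the abstract's observation that late fusion works well when the single-concept detectors themselves are reliable, since one weak detector drags the product down.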
DOI: 10.1007/s11042-015-2730-2
Publisher: Springer US (New York)
ISSN: 1380-7501
EISSN: 1573-7721
Source: SpringerLink Journals
Subjects:
Analysis
Annotations
Boolean
Buildings
Computer Communication Networks
Computer Science
Data Structures and Information Theory
Detectors
Documents
Fuzzy logic
Image detection
Indexing
Information Retrieval
Multimedia
Multimedia computer applications
Multimedia Information Systems
Queries
Semantics
Sensors
Special Purpose and Application-Based Systems
Studies
Triplets