Semantic-enriched Visual Vocabulary Construction in a Weakly Supervised Context

One of the prevalent learning tasks involving images is content-based image classification. This task is difficult especially because the low-level features used to digitally describe images usually capture little information about their semantics. In this paper, we tackle this difficulty by enriching the semantic content of the image representation using external knowledge. The underlying hypothesis of our work is that a more semantically rich image representation yields higher machine learning performance, without the need to modify the learning algorithms themselves. The external semantic information is provided in the form of non-positional image labels, which places our work in a weakly supervised context. Two approaches are proposed: the first incorporates the labels into the visual vocabulary construction algorithm, producing dedicated visual vocabularies; the second adds a filtering phase as a pre-processing step before vocabulary construction, in which known positive and known negative sets are built and features unlikely to be associated with the objects denoted by the labels are filtered out. We apply our proposal to content-based image classification and show that semantically enriching the image representation yields higher classification performance than the baseline representation.

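The two strategies sketched in the abstract can be illustrated with a minimal, hypothetical example; this is not the authors' implementation. It assumes k-means visual words and a simple nearest-prototype filter, and all function names, parameters, and the prototype-distance criterion are assumptions made for illustration only.

# Hypothetical sketch (not the authors' code): label-guided visual vocabulary
# construction and feature pre-filtering, assuming k-means visual words and a
# simple nearest-prototype filter. Names and parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def dedicated_vocabulary(descriptors_per_image, labels_per_image, target_label,
                         n_words=200, random_state=0):
    # Approach 1: cluster only descriptors extracted from images tagged with
    # target_label, producing a visual vocabulary dedicated to that label.
    positive = [d for d, labs in zip(descriptors_per_image, labels_per_image)
                if target_label in labs]
    stacked = np.vstack(positive)
    km = KMeans(n_clusters=n_words, n_init=10, random_state=random_state).fit(stacked)
    return km.cluster_centers_  # the dedicated visual words

def filter_descriptors(descriptors_per_image, labels_per_image, target_label):
    # Approach 2 (sketch): build known-positive / known-negative descriptor sets
    # from the labels and drop descriptors from positive images that lie closer
    # to the negative prototype than to the positive one.
    pos = np.vstack([d for d, labs in zip(descriptors_per_image, labels_per_image)
                     if target_label in labs])
    neg = np.vstack([d for d, labs in zip(descriptors_per_image, labels_per_image)
                     if target_label not in labs])
    pos_proto, neg_proto = pos.mean(axis=0), neg.mean(axis=0)
    keep = np.linalg.norm(pos - pos_proto, axis=1) <= np.linalg.norm(pos - neg_proto, axis=1)
    return pos[keep]  # filtered descriptors, ready for vocabulary construction

In a standard bag-of-features pipeline, each image's (filtered) descriptors would then be quantized against the dedicated vocabularies to form the enriched representation used by the classifier.
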
Bibliographic Details
Published in: arXiv.org, 2015-12
Main authors: Marian-Andrei Rizoiu; Velcin, Julien; Lallich, Stéphane
Format: Article
Language: English
Subjects: Algorithms; Computer Science - Computer Vision and Pattern Recognition; Enrichment; Image classification; Knowledge representation; Labels; Machine learning; Semantics
Online access: Full text
container_title arXiv.org
creator Marian-Andrei Rizoiu
Velcin, Julien
Lallich, Stéphane
doi_str_mv 10.48550/arxiv.1512.04605
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2015-12
issn 2331-8422
language eng
recordid cdi_arxiv_primary_1512_04605
source arXiv.org; Free E-Journals
subjects Algorithms
Computer Science - Computer Vision and Pattern Recognition
Enrichment
Image classification
Knowledge representation
Labels
Machine learning
Semantics
title Semantic-enriched Visual Vocabulary Construction in a Weakly Supervised Context