Contextual Kernel and Spectral Methods for Learning the Semantics of Images
This paper presents contextual kernel and spectral methods for learning the semantics of images that allow us to automatically annotate an image with keywords. First, to exploit the context of visual words within images for automatic image annotation, we define a novel spatial string kernel to quantify the similarity between images. Specifically, we represent each image as a 2-D sequence of visual words and measure the similarity between two 2-D sequences using the shared occurrences of s-length 1-D subsequences by decomposing each 2-D sequence into two orthogonal 1-D sequences. Based on our proposed spatial string kernel, we further formulate automatic image annotation as a contextual keyword propagation problem, which can be solved very efficiently by linear programming. Unlike the traditional relevance models that treat each keyword independently, the proposed contextual kernel method for keyword propagation takes into account the semantic context of annotation keywords and propagates multiple keywords simultaneously. Significantly, this type of semantic context can also be incorporated into spectral embedding for refining the annotations of images predicted by keyword propagation. Experiments on three standard image datasets demonstrate that our contextual kernel and spectral methods can achieve significantly better results than the state of the art.
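The abstract describes the spatial string kernel in enough detail to illustrate the core idea. The sketch below is a minimal, hypothetical rendering rather than the paper's implementation: it assumes contiguous s-length subsequences (s-grams), measures "shared occurrences" as a min-count intersection, and the function names (`spatial_string_kernel`, `sgrams`) and toy grids are invented for illustration.

```python
# Minimal sketch of a spatial string kernel over visual-word grids.
# Assumptions (not from the paper): contiguous s-grams, min-count intersection.
from collections import Counter

def sgrams(seq, s):
    """Count contiguous length-s subsequences of a 1-D visual-word sequence."""
    return Counter(tuple(seq[i:i + s]) for i in range(len(seq) - s + 1))

def directional_sequences(grid):
    """Decompose a 2-D visual-word grid into row-wise and column-wise 1-D sequences."""
    rows = [list(r) for r in grid]
    cols = [list(c) for c in zip(*grid)]
    return rows + cols

def spatial_string_kernel(grid_a, grid_b, s=2):
    """Similarity of two images: shared s-length subsequences across both decompositions."""
    counts_a, counts_b = Counter(), Counter()
    for seq in directional_sequences(grid_a):
        counts_a.update(sgrams(seq, s))
    for seq in directional_sequences(grid_b):
        counts_b.update(sgrams(seq, s))
    return sum(min(n, counts_b[g]) for g, n in counts_a.items() if g in counts_b)

# Toy usage: 3x3 grids of visual-word IDs (hypothetical data).
img1 = [[1, 2, 3], [4, 5, 6], [1, 2, 3]]
img2 = [[1, 2, 9], [4, 5, 6], [7, 2, 3]]
print(spatial_string_kernel(img1, img2, s=2))
```

In the paper's pipeline, kernel values of this kind quantify image-to-image similarity and feed the contextual keyword propagation step that the abstract says is solved by linear programming.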
Saved in:
Published in: IEEE transactions on image processing 2011-06, Vol.20 (6), p.1739-1750
Main authors: Zhiwu Lu; Ip, H H S; Yuxin Peng
Format: Article
Language: English
Subjects:
Online access: Order full text
Field | Value |
---|---|
container_end_page | 1750 |
container_issue | 6 |
container_start_page | 1739 |
container_title | IEEE transactions on image processing |
container_volume | 20 |
creator | Zhiwu Lu; Ip, H H S; Yuxin Peng |
description | This paper presents contextual kernel and spectral methods for learning the semantics of images that allow us to automatically annotate an image with keywords. First, to exploit the context of visual words within images for automatic image annotation, we define a novel spatial string kernel to quantify the similarity between images. Specifically, we represent each image as a 2-D sequence of visual words and measure the similarity between two 2-D sequences using the shared occurrences of s-length 1-D subsequences by decomposing each 2-D sequence into two orthogonal 1-D sequences. Based on our proposed spatial string kernel, we further formulate automatic image annotation as a contextual keyword propagation problem, which can be solved very efficiently by linear programming. Unlike the traditional relevance models that treat each keyword independently, the proposed contextual kernel method for keyword propagation takes into account the semantic context of annotation keywords and propagates multiple keywords simultaneously. Significantly, this type of semantic context can also be incorporated into spectral embedding for refining the annotations of images predicted by keyword propagation. Experiments on three standard image datasets demonstrate that our contextual kernel and spectral methods can achieve significantly better results than the state of the art. |
doi_str_mv | 10.1109/TIP.2010.2103082 |
format | Article |
eissn | 1941-0042 |
pmid | 21193376 |
coden | IIPRE4 |
publisher | New York, NY: IEEE |
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Jun 2011 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1057-7149 |
ispartof | IEEE transactions on image processing, 2011-06, Vol.20 (6), p.1739-1750 |
issn | 1057-7149; 1941-0042 |
language | eng |
recordid | cdi_proquest_miscellaneous_868030394 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Annotation refinement; Annotations; Applied sciences; Artificial Intelligence; Context; Correlation; Documentation - methods; Exact sciences and technology; Image Enhancement - methods; Image Interpretation, Computer-Assisted - methods; Image processing; Information, signal and communications theory; Kernel; kernel methods; Kernels; keyword propagation; Linear programming; Manifolds; Natural Language Processing; Pattern Recognition, Automated - methods; Propagation; Reproducibility of Results; Semantics; Sensitivity and Specificity; Signal processing; Similarity; spectral embedding; Spectral methods; string kernel; Strings; Studies; Telecommunications and information theory; Training; Visual; visual words; Visualization |
title | Contextual Kernel and Spectral Methods for Learning the Semantics of Images |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T16%3A32%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Contextual%20Kernel%20and%20Spectral%20Methods%20for%20Learning%20the%20Semantics%20of%20Images&rft.jtitle=IEEE%20transactions%20on%20image%20processing&rft.au=Zhiwu%20Lu&rft.date=2011-06-01&rft.volume=20&rft.issue=6&rft.spage=1739&rft.epage=1750&rft.pages=1739-1750&rft.issn=1057-7149&rft.eissn=1941-0042&rft.coden=IIPRE4&rft_id=info:doi/10.1109/TIP.2010.2103082&rft_dat=%3Cproquest_RIE%3E889443524%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=867468390&rft_id=info:pmid/21193376&rft_ieee_id=5678649&rfr_iscdi=true |