A Visual Annotation Framework Using Common-Sensical and Linguistic Relationships for Semantic Media Retrieval

Bibliographic Details
Main Authors: Shevade, Bageshree; Sundaram, Hari
Format: Conference Proceedings
Language: English
Online Access: Full text
Description
Summary: In this paper, we present a novel image annotation approach with an emphasis on (a) common-sense-based semantic propagation, (b) visual annotation interfaces, and (c) novel evaluation schemes. The annotation system is interactive, intuitive, and real-time. We attempt to propagate the semantics of the annotations using WordNet, ConceptNet, and low-level features extracted from the images. We introduce novel semantic dissimilarity measures and propagation frameworks. We develop a novel visual annotation interface that allows a user to group images by creating visual concepts through direct-manipulation metaphors, without manual annotation. We also develop a new evaluation technique for annotation that is based on common-sense relationships between concepts. Our experimental results on three different datasets indicate that the annotation system performs very well. The semantic propagation results are good: we converge close to the semantics of the image after annotating only a small fraction (~16.8%) of the database images.
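As an illustration only, the general idea of propagating annotations by combining low-level visual similarity with a common-sense semantic dissimilarity can be sketched as follows. This is not the authors' implementation: the relation table below is a hypothetical stand-in for ConceptNet/WordNet, and the linear cost combination is an assumption for the sketch.

```python
# Toy sketch: label an unannotated image by minimizing a weighted sum of
# (a) visual distance to already-annotated images and
# (b) semantic dissimilarity to the labels present in the database.

# Hypothetical concept dissimilarities in [0, 1] (stand-in for ConceptNet).
CONCEPT_DISSIM = {
    frozenset({"beach", "sea"}): 0.2,
    frozenset({"beach", "mountain"}): 0.8,
    frozenset({"sea", "mountain"}): 0.7,
}

def concept_dissim(a, b):
    """Semantic dissimilarity; unknown pairs are maximally dissimilar."""
    if a == b:
        return 0.0
    return CONCEPT_DISSIM.get(frozenset({a, b}), 1.0)

def visual_dist(f1, f2):
    """Euclidean distance between low-level feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(f1, f2)) ** 0.5

def propagate(annotated, feat, alpha=0.5):
    """Return the label with the lowest combined visual + semantic cost.

    annotated: list of (feature_vector, label) pairs.
    alpha: weight of the visual term against the semantic term.
    """
    labels = {lbl for _, lbl in annotated}

    def cost(candidate):
        vis = min(visual_dist(feat, f)
                  for f, lbl in annotated if lbl == candidate)
        sem = sum(concept_dissim(candidate, lbl) for lbl in labels) / len(labels)
        return alpha * vis + (1 - alpha) * sem

    return min(labels, key=cost)

annotated = [([0.9, 0.1], "beach"), ([0.8, 0.2], "sea"), ([0.1, 0.9], "mountain")]
print(propagate(annotated, [0.85, 0.15]))  # → sea
```

In this sketch, a small set of manually annotated images suffices to label the rest, which mirrors the paper's observation that annotating a small fraction of the database can recover image semantics.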
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/11670834_20