Cross-Modal Interaction and Integration with Relevance Feedback for Medical Image Retrieval



Bibliographic Details
Main Author: Cham, Tat-Jen
Format: Conference Proceedings
Language: English
Subjects:
Online Access: Full Text
Description
Abstract: This paper presents a cross-modal approach to image retrieval from a medical image collection, integrating visual information based on purely low-level image content with case-related textual information from the annotated XML files. The advantages of both modalities are exploited by involving the users in the retrieval loop. For content-based search, low-level visual features are extracted in vector form at different image representations. For text-based search, keywords from the annotated files are extracted and indexed using the vector space model of information retrieval. Based on the relevance feedback, textual and visual query refinements are performed, and the user's perceived semantics are propagated from one modality to the other. Finally, the most similar images are obtained by a linear combination of similarity matching and re-ordering in a pre-filtered image set. The experiments are performed on a collection of diverse medical images with case-based annotation of each image by experts. They demonstrate the flexibility and effectiveness of the proposed approach compared to using only a single modality or no feedback information.
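The retrieval pipeline the abstract describes — vector-form features per modality, query refinement from relevance feedback, and a linear combination of similarity scores for final ranking — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the Rocchio-style refinement, the cosine similarity measure, and all weighting parameters (`alpha`, `beta`, `gamma`, `w_visual`) are assumptions chosen for clarity.

```python
import numpy as np

def rocchio_refine(query, relevant, non_relevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Refine a query vector from relevance feedback (Rocchio-style).

    query: 1-D feature vector; relevant / non_relevant: 2-D arrays whose
    rows are the feature vectors of images the user marked in the loop.
    """
    refined = alpha * query
    if len(relevant):
        refined = refined + beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        refined = refined - gamma * np.mean(non_relevant, axis=0)
    return refined

def cosine_sim(query, docs):
    """Cosine similarity between a query vector and each row of docs."""
    q = query / (np.linalg.norm(query) + 1e-12)
    d = docs / (np.linalg.norm(docs, axis=1, keepdims=True) + 1e-12)
    return d @ q

def fused_ranking(vis_query, vis_docs, txt_query, txt_docs, w_visual=0.5):
    """Late fusion: linearly combine visual and textual similarity scores
    and return the resulting ranking (best match first) plus the scores."""
    scores = (w_visual * cosine_sim(vis_query, vis_docs)
              + (1.0 - w_visual) * cosine_sim(txt_query, txt_docs))
    return np.argsort(-scores), scores
```

In a full system, `fused_ranking` would be applied only to the pre-filtered candidate set mentioned in the abstract, and the refined queries from `rocchio_refine` would feed the next feedback iteration in each modality.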
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-540-69423-6_43