Enhance Composed Image Retrieval via Multi-level Collaborative Localization and Semantic Activeness Perception
Published in: IEEE Transactions on Multimedia, 2024-01, Vol. 26, pp. 1-12
Main Authors: , , , ,
Format: Article
Language: English
Online Access: Order full text
Abstract: Composed image retrieval (CIR) is an emerging and challenging research task that combines two modalities, a reference image and a modification text, into one query to retrieve the target image. In online shopping scenarios, the user would use the modification text as feedback to describe the difference between the reference image and the desired image. To handle this task, two main problems must be addressed. One is the localization problem: how to precisely find the spatial areas of the image mentioned by the text. The other is the modification problem: how to effectively modify the image semantics based on the text. However, existing methods merely fuse information from the two modalities coarsely, and the accurate spatial and semantic correspondence between these two heterogeneous features tends to be neglected. As a result, image details cannot be precisely located or modified. To this end, we integrate information from the two modalities more accurately in both spatial and semantic aspects. We propose an end-to-end framework for the CIR task that contains three key components: a Multi-level Collaborative Localization (MCL) module, a Differential Semantics Discrimination (DSD) module, and Image Difference Enhancement (IDE) constraints. Specifically, to address the localization problem, MCL precisely grounds the text in image regions by collaboratively using text positioning information across multiple image layers. For the modification problem, DSD builds a distribution that evaluates how likely each semantic dimension of the image embedding is to be modified, and IDE learns the modification patterns of the text against the image embedding based on this distribution. Extensive experiments on three datasets show that the proposed method achieves outstanding performance compared with state-of-the-art methods.
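The abstract gives only a high-level view of the approach, so the following is a minimal, hedged sketch of the general CIR formulation it describes: compose a reference-image embedding with a modification-text embedding, gate which embedding dimensions are allowed to change, and rank candidate targets by similarity. This is not the paper's MCL/DSD/IDE implementation; the per-dimension gate only loosely mirrors the idea of estimating how likely each semantic dimension is to be modified, and all names here (`ComposedQueryEncoder`, `retrieve`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ComposedQueryEncoder(nn.Module):
    """Toy composed-query encoder (illustration only, not the paper's method).

    Fuses a reference-image embedding with a modification-text embedding and
    gates, per dimension, how much of the image embedding should be modified.
    """

    def __init__(self, dim: int = 512):
        super().__init__()
        # Per-dimension "modification possibility" gate (assumption, loosely
        # analogous to the distribution over semantic dimensions in the abstract).
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        # Text-conditioned replacement values for the gated dimensions.
        self.modify = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([img_emb, txt_emb], dim=-1)
        g = self.gate(joint)                       # which dimensions to change
        delta = self.modify(joint)                 # what to change them to
        composed = img_emb * (1 - g) + delta * g   # keep vs. modify per dimension
        return F.normalize(composed, dim=-1)


def retrieve(composed_q: torch.Tensor, target_embs: torch.Tensor) -> torch.Tensor:
    """Rank candidate target images by cosine similarity to the composed query."""
    target_embs = F.normalize(target_embs, dim=-1)
    return composed_q @ target_embs.t()            # (queries, candidates) scores


# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    enc = ComposedQueryEncoder(dim=512)
    img = F.normalize(torch.randn(4, 512), dim=-1)    # reference-image embeddings
    txt = F.normalize(torch.randn(4, 512), dim=-1)    # modification-text embeddings
    gallery = torch.randn(100, 512)                   # candidate target embeddings
    scores = retrieve(enc(img, txt), gallery)         # (4, 100) similarity matrix
    print(scores.argmax(dim=-1))                      # top-ranked target per query
```

In this sketch the composed query is trained (in a full system) with a retrieval loss so that it lands near the true target embedding; the gate-and-residual design is just one common way to let the text modify only the relevant parts of the image representation.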
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2023.3273466