Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning
Saved in:
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Visual retrieval systems face frequent model updates and redeployment. Re-extracting features for the entire database after every update is a heavy workload. Feature compatibility enables the newly learned visual features to be directly compared with the old features stored in the database, so that when updating the deployed model, the inflexible and time-consuming feature re-extraction process can be bypassed. However, the old feature space that must remain compatible is not ideal and suffers a distribution discrepancy with the new space, caused by different supervision losses. In this work, we propose a globally optimized Dual-Tuning method to obtain feature compatibility across different networks and losses. A feature-level prototype loss is proposed to explicitly align the two types of embedding features by transferring global prototype information. Furthermore, we design a component-level mutual structural regularization to implicitly optimize the intrinsic feature structure. Experimental results on million-scale datasets demonstrate that our Dual-Tuning obtains feature compatibility without sacrificing performance. (Our code will be available at https://github.com/yanbai1993/Dual-Tuning)
DOI: 10.48550/arxiv.2108.02959