LG-Gaze: Learning Geometry-aware Continuous Prompts for Language-Guided Gaze Estimation
Format: Article
Language: English
Abstract: The ability of gaze estimation models to generalize is often significantly hindered by various gaze-irrelevant factors, especially when the training dataset is limited. Current strategies address this challenge through various domain generalization techniques, yet they have had limited success, owing to the risk of overfitting when relying solely on value labels for regression. Recent progress in pre-trained vision-language models has motivated us to capitalize on the abundant semantic information they provide. In this paper we propose a novel approach that reframes the gaze estimation task as a vision-language alignment problem. Our proposed framework, named Language-Guided Gaze Estimation (LG-Gaze), learns continuous and geometry-sensitive features for gaze estimation, benefiting from the rich prior knowledge of vision-language models. Specifically, LG-Gaze aligns gaze features with continuous linguistic features through our proposed multimodal contrastive regression loss, which assigns adaptive weights to different negative samples. Furthermore, to better adapt to the labels of the gaze estimation task, we propose a geometry-aware interpolation method to obtain more precise gaze embeddings. Through extensive experiments, we validate the efficacy of our framework on four different cross-domain evaluation tasks.
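The abstract's multimodal contrastive regression loss pairs each gaze feature with a continuous language embedding and down-weights negatives whose gaze labels are angularly close to the anchor, so near-duplicate directions are not pushed apart as hard as distant ones. A minimal NumPy sketch of that idea follows; the function name, the angular weighting scheme, and the temperature `tau` are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def contrastive_regression_loss(gaze_feats, text_feats, labels, tau=0.1):
    """Hypothetical sketch of a contrastive regression loss with
    adaptive negative weights (illustrative, not the paper's exact loss).

    Each gaze feature is pulled toward its paired language embedding;
    a negative pair is weighted by the angular gap between the two
    samples' gaze labels, so similar directions repel each other less.
    """
    # Cosine-similarity logits between gaze and language embeddings.
    g = gaze_feats / np.linalg.norm(gaze_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sim = g @ t.T / tau                            # (N, N) logits

    # Adaptive weights: pairwise angle between normalized gaze labels.
    l = labels / np.linalg.norm(labels, axis=1, keepdims=True)
    ang = np.arccos(np.clip(l @ l.T, -1.0, 1.0))   # pairwise angles
    w = ang / (ang.max() + 1e-8)                   # larger gap -> harder negative
    np.fill_diagonal(w, 1.0)                       # positives keep full weight

    # Weighted softmax cross-entropy over each row (stable: subtract row max).
    e = np.exp(sim - sim.max(axis=1, keepdims=True)) * w
    loss = -np.log(np.diag(e) / e.sum(axis=1))
    return loss.mean()
```

The weighting keeps the loss a valid softmax cross-entropy while letting label geometry, rather than a fixed margin, decide how strongly each negative contributes.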
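Gaze labels live on a sphere of unit direction vectors, so a plausible form of "geometry-aware interpolation" is spherical linear interpolation (slerp) between anchor directions: intermediate points follow the great circle rather than the straight chord, staying unit-norm with even angular spacing. The sketch below illustrates that geometric idea; the paper's exact interpolation scheme may differ.

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between two gaze direction vectors.

    Illustrative stand-in for geometry-aware interpolation: the result
    lies on the great circle through a and b at fraction t of the angle
    between them, so it keeps unit norm (unlike linear interpolation).
    """
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a @ b, -1.0, 1.0))   # angle between anchors
    if omega < 1e-6:                               # nearly parallel: fall back
        return (1.0 - t) * a + t * b               # to plain lerp
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```

For example, halfway between the x-axis and y-axis directions, slerp returns the 45-degree unit vector, whereas linear interpolation would return a vector of norm below 1 that has to be renormalized.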
DOI: 10.48550/arxiv.2411.08606