LG-Gaze: Learning Geometry-aware Continuous Prompts for Language-Guided Gaze Estimation

The ability of gaze estimation models to generalize is often significantly hindered by various factors unrelated to gaze, especially when the training dataset is limited. Current strategies aim to address this challenge through different domain generalization techniques, yet they have had limited success due to the risk of overfitting when relying solely on value labels for regression. Recent progress in pre-trained vision-language models has motivated us to capitalize on the abundant semantic information available. In this paper we propose a novel approach, reframing the gaze estimation task as a vision-language alignment problem. Our proposed framework, named Language-Guided Gaze Estimation (LG-Gaze), learns continuous and geometry-sensitive features for gaze estimation, benefiting from the rich prior knowledge of vision-language models. Specifically, LG-Gaze aligns gaze features with continuous linguistic features through our proposed multimodal contrastive regression loss, which assigns adaptive weights to different negative samples. Furthermore, to better adapt to the labels of the gaze estimation task, we propose a geometry-aware interpolation method to obtain more precise gaze embeddings. Through extensive experiments, we validate the efficacy of our framework on four different cross-domain evaluation tasks.
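The record gives no formula for the geometry-aware interpolation, but interpolating between two gaze directions in a way that respects spherical geometry is commonly done with spherical linear interpolation (slerp) on unit vectors rather than a straight linear blend. The sketch below illustrates that general idea only; the function name `slerp` and the parallel-vector fallback threshold are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between two gaze directions.

    Interpolates along the great circle joining unit vectors a and b,
    so intermediate results stay on the unit sphere (constant norm),
    unlike a plain linear blend.
    """
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    omega = np.arccos(dot)            # angle between the two directions
    if omega < 1e-6:                  # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

For example, interpolating halfway between two orthogonal directions yields a unit vector bisecting the angle between them, whereas linear interpolation would shrink the norm to about 0.71.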

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org, 2024-11
Main authors: Yin, Pengwei; Wang, Jingjing; Zeng, Guanzhong; Xie, Di; Zhu, Jiang
Format: Article
Language: eng
Subjects: Adaptive sampling; Geometry; Labels; Language
Online access: Full text
description The ability of gaze estimation models to generalize is often significantly hindered by various factors unrelated to gaze, especially when the training dataset is limited. Current strategies aim to address this challenge through different domain generalization techniques, yet they have had limited success due to the risk of overfitting when relying solely on value labels for regression. Recent progress in pre-trained vision-language models has motivated us to capitalize on the abundant semantic information available. In this paper we propose a novel approach, reframing the gaze estimation task as a vision-language alignment problem. Our proposed framework, named Language-Guided Gaze Estimation (LG-Gaze), learns continuous and geometry-sensitive features for gaze estimation, benefiting from the rich prior knowledge of vision-language models. Specifically, LG-Gaze aligns gaze features with continuous linguistic features through our proposed multimodal contrastive regression loss, which assigns adaptive weights to different negative samples. Furthermore, to better adapt to the labels of the gaze estimation task, we propose a geometry-aware interpolation method to obtain more precise gaze embeddings. Through extensive experiments, we validate the efficacy of our framework on four different cross-domain evaluation tasks.
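The description names a multimodal contrastive regression loss with adaptive weights for negative samples but gives no formulation. As a purely illustrative sketch of that kind of objective, the NumPy code below implements an InfoNCE-style contrastive loss between image and text features in which each negative pair's weight shrinks as its gaze label approaches the anchor's; the Gaussian weighting function and the `tau` and `sigma` parameters are our own assumptions, not the paper's actual loss.

```python
import numpy as np

def contrastive_regression_loss(img_feat, txt_feat, gaze, tau=0.1, sigma=0.2):
    """Illustrative weighted contrastive loss for regression targets.

    img_feat, txt_feat: (B, D) image and language embeddings.
    gaze: (B, 3) gaze direction labels, one per sample.
    Negatives whose gaze labels nearly coincide with the anchor's are
    down-weighted, so near-duplicate labels are not pushed apart.
    """
    img = img_feat / np.linalg.norm(img_feat, axis=1, keepdims=True)
    txt = txt_feat / np.linalg.norm(txt_feat, axis=1, keepdims=True)
    logits = img @ txt.T / tau                      # (B, B) similarities

    g = gaze / np.linalg.norm(gaze, axis=1, keepdims=True)
    ang = np.arccos(np.clip(g @ g.T, -1.0, 1.0))    # pairwise angular label distance
    w = 1.0 - np.exp(-(ang / sigma) ** 2)           # similar labels -> weight near 0
    np.fill_diagonal(w, 1.0)                        # positive pair keeps full weight

    denom = np.log(np.sum(w * np.exp(logits), axis=1))
    loss = -(np.diagonal(logits) - denom)           # weighted InfoNCE per sample
    return loss.mean()
```

Because the positive pair always carries weight 1 and appears in the denominator, the per-sample loss is non-negative by construction; down-weighting label-similar negatives is one plausible reading of "adaptive weights for different negative samples."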
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-11
issn 2331-8422
language eng
recordid cdi_proquest_journals_3128429164
source Free E-Journals
subjects Adaptive sampling
Geometry
Labels
Language
title LG-Gaze: Learning Geometry-aware Continuous Prompts for Language-Guided Gaze Estimation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T17%3A37%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=LG-Gaze:%20Learning%20Geometry-aware%20Continuous%20Prompts%20for%20Language-Guided%20Gaze%20Estimation&rft.jtitle=arXiv.org&rft.au=Yin,%20Pengwei&rft.date=2024-11-13&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3128429164%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3128429164&rft_id=info:pmid/&rfr_iscdi=true