Do Pedestrians Pay Attention? Eye Contact Detection in the Wild

In urban or crowded environments, humans rely on eye contact for fast and efficient communication with nearby people. Autonomous agents also need to detect eye contact to interact with pedestrians and safely navigate around them. In this paper, we focus on eye contact detection in the wild, i.e., real-world scenarios for autonomous vehicles with no control over the environment or the distance of pedestrians. We introduce a model that leverages semantic keypoints to detect eye contact and show that this high-level representation (i) achieves state-of-the-art results on the publicly-available dataset JAAD, and (ii) conveys better generalization properties than leveraging raw images in an end-to-end network. To study domain adaptation, we create LOOK: a large-scale dataset for eye contact detection in the wild, which focuses on diverse and unconstrained scenarios for real-world generalization. The source code and the LOOK dataset are publicly shared towards an open science mission.

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org, 2021-12
Main authors: Belkada, Younes; Bertoni, Lorenzo; Caristan, Romain; Mordan, Taylor; Alahi, Alexandre
Format: Article
Language: eng
Subjects:
Online access: Full text
container_title arXiv.org
creator Belkada, Younes
Bertoni, Lorenzo
Caristan, Romain
Mordan, Taylor
Alahi, Alexandre
description In urban or crowded environments, humans rely on eye contact for fast and efficient communication with nearby people. Autonomous agents also need to detect eye contact to interact with pedestrians and safely navigate around them. In this paper, we focus on eye contact detection in the wild, i.e., real-world scenarios for autonomous vehicles with no control over the environment or the distance of pedestrians. We introduce a model that leverages semantic keypoints to detect eye contact and show that this high-level representation (i) achieves state-of-the-art results on the publicly-available dataset JAAD, and (ii) conveys better generalization properties than leveraging raw images in an end-to-end network. To study domain adaptation, we create LOOK: a large-scale dataset for eye contact detection in the wild, which focuses on diverse and unconstrained scenarios for real-world generalization. The source code and the LOOK dataset are publicly shared towards an open science mission.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2021-12
issn 2331-8422
language eng
recordid cdi_proquest_journals_2608261911
source Free E-Journals
subjects Datasets
Eye contact
Pedestrians
Source code
title Do Pedestrians Pay Attention? Eye Contact Detection in the Wild
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T13%3A21%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Do%20Pedestrians%20Pay%20Attention?%20Eye%20Contact%20Detection%20in%20the%20Wild&rft.jtitle=arXiv.org&rft.au=Younes%20Belkada&rft.date=2021-12-08&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2608261911%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2608261911&rft_id=info:pmid/&rfr_iscdi=true
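The abstract describes classifying eye contact from semantic (body-pose) keypoints rather than raw pixels. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the 17-joint COCO-style layout, the normalization step, and the `TinyMLP` classifier shape are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch: eye-contact probability from 2D pose keypoints.
# Assumes 17 COCO-style joints per pedestrian, each (x, y, confidence).
NUM_JOINTS = 17
FEATURE_DIM = NUM_JOINTS * 3

rng = np.random.default_rng(0)

def normalize_keypoints(kps):
    """Center keypoints on their mean and scale by their spread, so the
    representation is invariant to pedestrian distance and image position."""
    xy = kps[:, :2]
    centered = xy - xy.mean(axis=0)
    scale = np.linalg.norm(centered) + 1e-8
    out = kps.copy()
    out[:, :2] = centered / scale
    return out

class TinyMLP:
    """A minimal two-layer classifier standing in for the keypoint model."""
    def __init__(self, in_dim, hidden=64):
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, x):
        h = np.maximum(x @ self.w1 + self.b1, 0.0)   # ReLU hidden layer
        logit = h @ self.w2 + self.b2
        return 1.0 / (1.0 + np.exp(-logit))          # sigmoid -> probability

model = TinyMLP(FEATURE_DIM)
keypoints = rng.uniform(0.0, 1.0, (NUM_JOINTS, 3))   # fake pose detection
prob = model(normalize_keypoints(keypoints).reshape(-1)).item()
print(prob)  # eye-contact probability in (0, 1); weights here are random
```

Working on keypoints instead of raw crops is what the paper credits for the better generalization: the pose representation is low-dimensional and largely appearance-invariant, so a classifier trained on one dataset transfers more readily to unconstrained scenes.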