Towards Open-World Gesture Recognition

Providing users with accurate gestural interfaces, such as gesture recognition based on wrist-worn devices, is a key challenge in mixed reality. However, static machine learning processes in gesture recognition assume that training and test data come from the same underlying distribution. Unfortunately, in real-world applications involving gesture recognition, such as those based on wrist-worn devices, the data distribution may change over time. We formulate this problem of adapting recognition models to new tasks, where new data patterns emerge, as open-world gesture recognition (OWGR). We propose the use of continual learning to enable machine learning models to adapt to new tasks without degrading performance on previously learned tasks. However, exploring parameters for questions around when, and how, to train and deploy recognition models requires resource-intensive user studies, which may be impractical. To address this challenge, we propose a design engineering approach that enables offline analysis of a collected large-scale dataset by systematically examining various parameters and comparing different continual learning methods. Finally, we provide design guidelines to enhance the development of an open-world wrist-worn gesture recognition process.

Detailed Description

Saved in:
Bibliographic Details
Hauptverfasser: Shen, Junxiao, De Lange, Matthias, Xu, Xuhai "Orson", Zhou, Enmin, Tan, Ran, Suda, Naveen, Lazarewicz, Maciej, Kristensson, Per Ola, Karlson, Amy, Strasnick, Evan
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Shen, Junxiao
De Lange, Matthias
Xu, Xuhai "Orson"
Zhou, Enmin
Tan, Ran
Suda, Naveen
Lazarewicz, Maciej
Kristensson, Per Ola
Karlson, Amy
Strasnick, Evan
description Providing users with accurate gestural interfaces, such as gesture recognition based on wrist-worn devices, is a key challenge in mixed reality. However, static machine learning processes in gesture recognition assume that training and test data come from the same underlying distribution. Unfortunately, in real-world applications involving gesture recognition, such as those based on wrist-worn devices, the data distribution may change over time. We formulate this problem of adapting recognition models to new tasks, where new data patterns emerge, as open-world gesture recognition (OWGR). We propose the use of continual learning to enable machine learning models to adapt to new tasks without degrading performance on previously learned tasks. However, exploring parameters for questions around when, and how, to train and deploy recognition models requires resource-intensive user studies, which may be impractical. To address this challenge, we propose a design engineering approach that enables offline analysis of a collected large-scale dataset by systematically examining various parameters and comparing different continual learning methods. Finally, we provide design guidelines to enhance the development of an open-world wrist-worn gesture recognition process.
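The abstract contrasts static retraining, which assumes a fixed data distribution, with continual learning, which adapts to new tasks while preserving earlier ones. The paper's actual models and dataset are not reproduced here; the following is only a loose, minimal sketch of one family of continual learning methods (naive rehearsal with a small replay memory), using invented gesture names and synthetic 2-D features purely for illustration:

```python
import random

def make_gesture(label, center, jitter=0.3, rng=None):
    """Synthetic (label, feature-vector) pair for one gesture sample."""
    return (label, [c + rng.uniform(-jitter, jitter) for c in center])

def train_centroids(samples):
    """Fit a toy nearest-centroid classifier from (label, vector) pairs."""
    sums, counts = {}, {}
    for label, vec in samples:
        s = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in s] for lbl, s in sums.items()}

def predict(centroids, vec):
    """Return the label whose centroid is closest to vec."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(centroids[lbl], vec)))

rng = random.Random(0)
# Task A: two gestures learned first; Task B: a new gesture appears later.
task_a = [make_gesture("swipe", (0.0, 0.0), rng=rng) for _ in range(20)] \
       + [make_gesture("tap", (5.0, 5.0), rng=rng) for _ in range(20)]
task_b = [make_gesture("pinch", (10.0, 0.0), rng=rng) for _ in range(20)]

# Static retraining: fit only on the newest task, so old gestures vanish.
static_model = train_centroids(task_b)

# Naive rehearsal: replay a few stored samples per old class alongside task B.
memory = task_a[:5] + task_a[20:25]
rehearsal_model = train_centroids(memory + task_b)

print(sorted(static_model))       # → ['pinch']
print(sorted(rehearsal_model))    # → ['pinch', 'swipe', 'tap']
print(predict(rehearsal_model, [0.1, -0.1]))  # → swipe
```

The sketch only illustrates the stability-plasticity trade-off the abstract alludes to: with rehearsal, the model still recognizes the old "swipe" gesture after learning "pinch", whereas static retraining discards it. The paper compares several continual learning methods offline on a large-scale dataset rather than this toy setup.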
doi_str_mv 10.48550/arxiv.2401.11144
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2401.11144
language eng
recordid cdi_arxiv_primary_2401_11144
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Towards Open-World Gesture Recognition
url https://arxiv.org/abs/2401.11144