Vision- and tactile-based continuous multimodal intention and attention recognition for safer physical human-robot interaction
Main authors: | Wong, Christopher Yee; Vergez, Lucas; Suleiman, Wael |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Robotics |
Online access: | Order full text |
creator | Wong, Christopher Yee; Vergez, Lucas; Suleiman, Wael |
---|---|
description | Employing skin-like tactile sensors on robots enhances both the safety and
usability of collaborative robots by adding the capability to detect human
contact. Unfortunately, simple binary tactile sensors alone cannot determine
the context of the human contact -- whether it is a deliberate interaction or
an unintended collision that requires safety manoeuvres. Many published methods
classify discrete interactions using more advanced tactile sensors or by
analysing joint torques. Instead, we propose to augment the intention
recognition capabilities of simple binary tactile sensors by adding a
robot-mounted camera for human posture analysis. Different interaction
characteristics, including touch location, human pose, and gaze direction, are
used to train a supervised machine learning algorithm to classify whether a
touch is intentional or not with an F1-score of 86%. We demonstrate that
multimodal intention recognition is significantly more accurate than monomodal
analyses with the collaborative robot Baxter. Furthermore, our method can also
continuously monitor interactions that fluidly change between intentional and
unintentional by gauging the user's attention through gaze. If a user stops
paying attention mid-task, the proposed intention and attention recognition
algorithm can activate safety features to prevent unsafe interactions. We also
employ a feature reduction technique that reduces the number of inputs to five
to achieve a more generalized low-dimensional classifier. This simplification
both reduces the amount of training data required and improves real-world
classification accuracy. It also renders the method potentially agnostic to the
robot and touch sensor architectures while achieving a high degree of task
adaptability. |
doi_str_mv | 10.48550/arxiv.2206.11350 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2206.11350 |
language | eng |
recordid | cdi_arxiv_primary_2206_11350 |
source | arXiv.org |
subjects | Computer Science - Robotics |
title | Vision- and tactile-based continuous multimodal intention and attention recognition for safer physical human-robot interaction |
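The description field above outlines a supervised, multimodal touch-intention classifier (touch location, human pose, gaze direction) gated by gaze-based attention monitoring, with the feature set reduced to five inputs. The record itself contains no code; the following is a minimal illustrative sketch in Python with scikit-learn. The feature layout, the random-forest model, the synthetic data, and the attention threshold are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only -- NOT the authors' pipeline. Feature layout, model
# choice, synthetic data, and thresholds are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical five-dimensional feature vector (the paper reduces its inputs to
# five): [touch_x, touch_y, torso_lean, gaze_yaw_error, gaze_pitch_error]
X = rng.normal(size=(500, 5))
# Toy labels: 1 = intentional touch, 0 = unintended collision. Here a touch is
# "intentional" when the simulated gaze roughly points at the robot.
y = (X[:, 3] ** 2 + X[:, 4] ** 2 < 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("F1 on toy data:", round(f1_score(y_test, clf.predict(X_test)), 2))


def classify_touch(features, gaze_error_deg, attention_limit_deg=30.0):
    """Gate the classifier with a simple gaze-based attention check (assumed).

    If the user's gaze error exceeds the limit, the touch is treated as
    unintentional so that safety behaviour can be triggered.
    """
    if gaze_error_deg > attention_limit_deg:
        return False
    return bool(clf.predict(np.asarray(features, dtype=float).reshape(1, -1))[0])


# Example: a touch registered while the user is looking well away from the robot.
print(classify_touch([0.1, -0.2, 0.0, 0.5, 0.3], gaze_error_deg=45.0))  # -> False
```

The gaze gate mirrors the abstract's idea of continuous attention monitoring: if the user stops paying attention mid-task, the touch is treated as unintentional regardless of the classifier output, so safety features can engage.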