Vision- and Tactile-Based Continuous Multimodal Intention and Attention Recognition for Safer Physical Human-Robot Interaction

Bibliographic Details
Published in: IEEE Transactions on Automation Science and Engineering, 2024-07, Vol. 21 (3), pp. 3205-3215
Main Authors: Wong, Christopher Yee; Vergez, Lucas; Suleiman, Wael
Format: Article
Language: English
Description
Abstract: Employing skin-like tactile sensors on robots enhances both the safety and usability of collaborative robots by adding the capability to detect human contact. Unfortunately, simple binary tactile sensors alone cannot determine the context of the human contact: whether it is a deliberate interaction or an unintended collision that requires safety manoeuvres. Many published methods classify discrete interactions using more advanced tactile sensors or by analysing joint torques. Instead, we propose to augment the intention recognition capabilities of simple binary tactile sensors by adding a robot-mounted camera for human posture analysis. Different interaction characteristics, including touch location, human pose, and gaze direction, are used to train a supervised machine learning algorithm to classify whether a touch is intentional or not with an F1-score of 86%. Using the collaborative robot Baxter, we demonstrate that multimodal intention recognition is significantly more accurate than monomodal analyses. Furthermore, our method can continuously monitor interactions that fluidly change between intentional and unintentional by gauging the user's attention through gaze. If a user stops paying attention mid-task, the proposed intention and attention recognition algorithm can activate safety features to prevent unsafe interactions. We also employ a feature reduction technique that reduces the number of inputs to five to achieve a more generalized low-dimensional classifier. This simplification both reduces the amount of training data required and improves real-world classification accuracy. It also renders the method potentially agnostic to the robot and touch sensor architectures while achieving a high degree of task adaptability.

Note to Practitioners: Whenever a user interacts physically with a robot, such as in collaborative manufacturing, the robot may respond to unintended touch inputs from the user. These may arise from accidental body collisions or because the user is suddenly distracted and no longer paying attention to the task. We propose an easy-to-implement method to augment the safety of physical human-robot collaboration by determining whether a touch from the user is intentional or not, using robot-mounted basic touch sensors and computer vision. The algorithm examines the location of the user's hands relative to the touched sensors in addition to observing where the user is looking. Machine learning is then used to classify in real time whether the touch is intentional.
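As a rough illustration of the pipeline the abstract describes (supervised classification over interaction features, reduced to five inputs, plus a gaze-based attention check), the following minimal Python sketch uses scikit-learn. The feature names, synthetic data, model choice (a random forest), and the gaze-cone threshold are all assumptions made here for illustration; the paper's actual features, classifier, and parameters are not given in this record.

    # Hypothetical sketch only; names and data are illustrative, not the paper's.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)

    # Illustrative per-touch features: touch sensor x/y on the robot body,
    # distance from the user's nearest hand to the touched sensor, gaze
    # yaw/pitch relative to the robot, torso lean, head-to-sensor distance,
    # and elbow angle. Real features would come from the camera and skin.
    n_samples, n_features = 500, 8
    X = rng.normal(size=(n_samples, n_features))
    y = rng.integers(0, 2, size=n_samples)  # 1 = intentional, 0 = unintended

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Feature reduction to five inputs, mirroring the low-dimensional
    # classifier the abstract describes, followed by a supervised model.
    clf = Pipeline([
        ("select", SelectKBest(score_func=f_classif, k=5)),
        ("model", RandomForestClassifier(random_state=0)),
    ])
    clf.fit(X_train, y_train)

    # On this random synthetic data the score is meaningless; the paper
    # reports an F1-score of 86% on its real interaction data.
    print("F1:", f1_score(y_test, clf.predict(X_test)))

    def user_is_attentive(gaze_angle_to_robot_deg: float,
                          threshold_deg: float = 30.0) -> bool:
        """Toy attention check: treat the user as attentive while their
        gaze stays within a cone around the robot (threshold invented)."""
        return abs(gaze_angle_to_robot_deg) <= threshold_deg

In use, a check like user_is_attentive would run continuously alongside the classifier, so that a touch classified as intentional can still trigger safety features if the user's gaze drifts away mid-task, in the spirit of the continuous monitoring the abstract describes.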
ISSN: 1545-5955, 1558-3783
DOI: 10.1109/TASE.2023.3276856