Testing Human-Hand Segmentation on In-Distribution and Out-of-Distribution Data in Human-Robot Interactions Using a Deep Ensemble Model
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Reliable detection and segmentation of human hands are critical for enhancing safety and facilitating advanced interactions in human-robot collaboration. Current research predominantly evaluates hand segmentation on in-distribution (ID) data, which reflects the training data of deep learning (DL) models. However, this approach fails to address out-of-distribution (OOD) scenarios that often arise in real-world human-robot interactions. In this study, we present a novel approach by evaluating the performance of pre-trained DL models on both ID data and more challenging OOD scenarios. To mimic realistic industrial scenarios, we designed a diverse dataset featuring simple and cluttered backgrounds with industrial tools, varying numbers of hands (0 to 4), and hands with and without gloves. For OOD scenarios, we incorporated unique and rare conditions such as finger-crossing gestures and motion blur from fast-moving hands, addressing both epistemic and aleatoric uncertainties. To capture multiple points of view (PoVs), we used both egocentric cameras, mounted on the operator's head, and static cameras to record RGB images of human-robot interactions. This approach allowed us to account for multiple camera perspectives while also evaluating the performance of models trained on existing egocentric datasets as well as static-camera datasets. For segmentation, we used a deep ensemble model composed of UNet and RefineNet as base learners. Performance was evaluated using segmentation metrics and uncertainty quantification via predictive entropy. Results revealed that models trained on industrial datasets outperformed those trained on non-industrial datasets, highlighting the importance of context-specific training. Although all models struggled with OOD scenarios, those trained on industrial datasets demonstrated significantly better generalization.
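
The summary's mention of uncertainty quantification via predictive entropy over a deep ensemble (with UNet- and RefineNet-style base learners) can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, array shapes, and binary hand/background formulation are illustrative assumptions.

```python
# Minimal sketch (assumed setup): per-pixel predictive entropy for a deep
# ensemble of binary hand-segmentation models.
import numpy as np

def ensemble_predictive_entropy(member_probs):
    """Compute a per-pixel predictive entropy map from ensemble outputs.

    member_probs: array of shape (M, H, W), where each of the M members gives
    the predicted probability that a pixel belongs to the 'hand' class.
    Returns an (H, W) map; higher values indicate higher predictive uncertainty.
    """
    # Average member probabilities to obtain the ensemble's predictive distribution.
    p_hand = np.mean(member_probs, axis=0)           # (H, W)
    p = np.stack([p_hand, 1.0 - p_hand], axis=0)     # (2, H, W): hand / background
    eps = 1e-12                                      # guard against log(0)
    return -np.sum(p * np.log(p + eps), axis=0)      # (H, W) entropy map

# Example: a two-member ensemble (e.g., one UNet-style and one RefineNet-style
# learner) producing probability maps for a small 4x4 image.
rng = np.random.default_rng(0)
probs = rng.uniform(size=(2, 4, 4))
entropy_map = ensemble_predictive_entropy(probs)
print(entropy_map.round(3))
```

Pixels where the members disagree (or where each member is itself unsure) receive higher entropy, which is how such maps are typically used to flag unreliable predictions in OOD conditions.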
DOI: 10.48550/arxiv.2501.07713