Few-Shot Classification of Interactive Activities of Daily Living (InteractADL)
Format: Article
Language: English
Abstract: Understanding Activities of Daily Living (ADLs) is a crucial step for many applications, including assistive robots, smart homes, and healthcare. To date, however, few benchmarks and methods have focused on complex ADLs, especially those involving multi-person interactions in home environments. In this paper, we propose a new dataset and benchmark, InteractADL, for understanding complex ADLs that involve interaction between humans (and objects). Complex ADLs occurring in home environments form a challenging long-tailed distribution, owing to the rarity of multi-person interactions, and pose fine-grained visual recognition tasks, owing to the presence of semantically and visually similar classes. To address these issues, we propose a novel method for fine-grained few-shot video classification, Name Tuning, which enables greater semantic separability by learning optimal class name vectors. We show that Name Tuning can be combined with existing prompt tuning strategies to learn the entire input text (rather than only the prompt or the class names), and we demonstrate improved few-shot classification performance on InteractADL and four other fine-grained visual classification benchmarks. For transparency and reproducibility, we release our code at https://github.com/zanedurante/vlm_benchmark.
DOI: 10.48550/arxiv.2406.01662
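
Below is a minimal sketch of the Name Tuning idea described in the abstract, combined with prompt tuning so that the entire text input is learned. It assumes a frozen CLIP-style vision-language model (via the open-source `clip` package) and uses single images rather than video clips for simplicity; the class and helper names here (`NameTuner`, `text_features`) are illustrative, not the authors' actual API (see the released code for that).

```python
import torch
import torch.nn as nn
import clip


class NameTuner(nn.Module):
    """Learns continuous class-name vectors (Name Tuning) plus shared prompt
    vectors (prompt tuning); the CLIP backbone itself stays frozen, so the
    entire text input is learned."""

    def __init__(self, clip_model, num_classes, n_name_tokens=2, n_prompt_tokens=4):
        super().__init__()
        self.clip = clip_model
        for p in self.clip.parameters():  # freeze the vision-language model
            p.requires_grad_(False)
        dim = clip_model.token_embedding.embedding_dim
        # One learnable name-vector sequence per class (random init here;
        # initializing from the embeddings of the real class names is a
        # natural alternative).
        self.name_vectors = nn.Parameter(0.02 * torch.randn(num_classes, n_name_tokens, dim))
        # Learnable prompt vectors shared across all classes.
        self.prompt_vectors = nn.Parameter(0.02 * torch.randn(n_prompt_tokens, dim))

    def text_features(self):
        """Builds per-class embeddings [SOT][prompt][name][EOT][pad] and runs
        them through CLIP's frozen text transformer (mirrors encode_text)."""
        m, dev = self.clip, self.name_vectors.device
        tok = clip.tokenize("").to(dev)        # [1, 77]: SOT, EOT, padding
        sot = m.token_embedding(tok[:, :1])    # [1, 1, dim]
        eot = m.token_embedding(tok[:, 1:2])   # [1, 1, dim]
        C, n_name, dim = self.name_vectors.shape
        n_prompt = self.prompt_vectors.shape[0]
        seq = 2 + n_prompt + n_name            # SOT + prompt + name + EOT
        pad = torch.zeros(C, tok.shape[1] - seq, dim, device=dev)
        x = torch.cat([sot.expand(C, -1, -1),
                       self.prompt_vectors.unsqueeze(0).expand(C, -1, -1),
                       self.name_vectors,
                       eot.expand(C, -1, -1),
                       pad], dim=1)
        x = x + m.positional_embedding
        x = m.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)
        x = m.ln_final(x)
        return x[:, seq - 1, :] @ m.text_projection  # features at EOT position

    def forward(self, images):
        img = self.clip.encode_image(images)
        txt = self.text_features()
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        return self.clip.logit_scale.exp() * img @ txt.t()


# Few-shot adaptation: only the name and prompt vectors receive gradients.
model, _ = clip.load("ViT-B/32", device="cpu")
tuner = NameTuner(model.float(), num_classes=5)
opt = torch.optim.AdamW([tuner.name_vectors, tuner.prompt_vectors], lr=1e-3)
images = torch.randn(10, 3, 224, 224)  # stand-in for support-set frames
labels = torch.randint(0, 5, (10,))
for _ in range(20):
    loss = nn.functional.cross_entropy(tuner(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because only the small name and prompt vectors are optimized while the vision-language model stays frozen, very few parameters are updated per class, which is what makes this style of tuning plausible in the few-shot regime the abstract targets.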