CL-HOI: Cross-Level Human-Object Interaction Distillation from Vision Large Language Models
Format: Article
Language: English
Online Access: Order full text
Abstract: Human-object interaction (HOI) detection has seen advancements with Vision Language Models (VLMs), but these methods often depend on extensive manual annotations. Vision Large Language Models (VLLMs) can inherently recognize and reason about interactions at the image level but are computationally heavy and not designed for instance-level HOI detection. To overcome these limitations, we propose a Cross-Level HOI distillation (CL-HOI) framework, which distills instance-level HOIs from VLLMs' image-level understanding without the need for manual annotations. Our approach involves two stages: context distillation, where a Visual Linguistic Translator (VLT) converts visual information into linguistic form, and interaction distillation, where an Interaction Cognition Network (ICN) reasons about spatial, visual, and context relations. We design contrastive distillation losses to transfer image-level context and interaction knowledge from the teacher to the student model, enabling instance-level HOI detection. Evaluations on the HICO-DET and V-COCO datasets demonstrate that our CL-HOI surpasses existing weakly supervised and VLLM-supervised methods, showing its efficacy in detecting HOIs without manual labels.
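The abstract does not specify the paper's contrastive distillation losses. As a rough, hypothetical illustration of the general idea of contrastive feature distillation (not the paper's actual formulation), a minimal InfoNCE-style sketch in PyTorch, where each student embedding is pulled toward its paired teacher embedding and pushed away from the other teachers in the batch; the function name, shapes, and temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_feats, teacher_feats, temperature=0.07):
    # Hypothetical sketch, not the paper's loss.
    # Normalize both embedding sets so dot products are cosine similarities.
    s = F.normalize(student_feats, dim=-1)   # (N, D) student embeddings
    t = F.normalize(teacher_feats, dim=-1)   # (N, D) paired teacher embeddings
    # Similarity of every student embedding to every teacher embedding.
    logits = s @ t.t() / temperature         # (N, N)
    # The i-th student matches the i-th teacher: positives on the diagonal.
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)

# Example: 4 student/teacher embedding pairs of width 256.
loss = contrastive_distillation_loss(torch.randn(4, 256), torch.randn(4, 256))
```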
DOI: 10.48550/arxiv.2410.15657