MABe22: A Multi-Species Multi-Task Benchmark for Learned Representations of Behavior
Saved in:
Main authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | We introduce MABe22, a large-scale, multi-agent video and trajectory
benchmark to assess the quality of learned behavior representations. This
dataset is collected from a variety of biology experiments, and includes
triplets of interacting mice (4.7 million frames video+pose tracking data, 10
million frames pose only), symbiotic beetle-ant interactions (10 million frames
video data), and groups of interacting flies (4.4 million frames of pose
tracking data). Accompanying these data, we introduce a panel of real-life
downstream analysis tasks to assess the quality of learned representations by
evaluating how well they preserve information about the experimental conditions
(e.g. strain, time of day, optogenetic stimulation) and animal behavior. We
test multiple state-of-the-art self-supervised video and trajectory
representation learning methods to demonstrate the use of our benchmark,
revealing that methods developed using human action datasets do not fully
translate to animal datasets. We hope that our benchmark and dataset encourage
a broader exploration of behavior representation learning methods across
species and settings. |
DOI: | 10.48550/arxiv.2207.10553 |