An automated behavior analysis system for freely moving rodents using depth image


Detailed Description

Bibliographic Details
Published in: Medical & Biological Engineering & Computing, 2018-10, Vol. 56 (10), pp. 1807-1821
Main Authors: Wang, Zheyuan; Mirbozorgi, S. Abdollah; Ghovanloo, Maysam
Format: Article
Language: English
Online Access: Full text
Description
Summary: A rodent behavior analysis system is presented, capable of automated tracking, pose estimation, and recognition of nine behaviors in freely moving animals. The system tracks three key points on the rodent body (nose, center of body, and base of tail) to estimate its pose and head rotation angle in real time. A support vector machine (SVM)-based model, including label optimization steps, is trained to classify each frame as one of: resting, walking, bending, grooming, sniffing, rearing supported, rearing unsupported, micro-movements, or “other” behaviors. Compared to conventional red-green-blue (RGB) camera-based methods, the proposed system operates on 3D depth images provided by the Kinect infrared (IR) camera, enabling stable performance regardless of lighting conditions and of the animal's color contrast with the background. This is particularly beneficial for monitoring the behavior of nocturnal animals. 3D features are extracted directly from the depth stream and combined with contour-based 2D features to further improve recognition accuracy. The system is validated on three freely behaving rats for 168 min in total. The behavior recognition model achieved a cross-validation accuracy of 86.8% on the rat used for training and accuracies of 82.1% and 83% on the other two “testing” rats. Automated head angle estimation aided by behavior recognition achieved a correlation of 0.76 with human expert annotation.

Graphical abstract: Top view of a rat freely behaving in a standard homecage, captured by Kinect-v2 sensors. The depth image is used to construct a 3D topography of the animal for pose estimation, behavior recognition, and head angle calculation. Results of the processed data are displayed on the user interface in various forms.
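The pipeline described in the summary — per-frame feature vectors fed to an SVM over nine behavior classes, plus a head angle derived from the three tracked key points — can be sketched as follows. This is an illustrative sketch, not the authors' code: the feature dimensions and data are synthetic placeholders, and the exact head-angle definition (angle between the tail-to-body axis and the body-to-nose axis) is an assumption.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# The nine behavior classes named in the abstract.
BEHAVIORS = ["resting", "walking", "bending", "grooming", "sniffing",
             "rearing supported", "rearing unsupported",
             "micro-movements", "other"]

def head_angle_deg(nose, body, tail):
    """Signed angle (degrees) between the body axis (tail -> body) and the
    head axis (body -> nose). The paper's exact definition is assumed."""
    body_axis = np.asarray(body, float) - np.asarray(tail, float)
    head_axis = np.asarray(nose, float) - np.asarray(body, float)
    a = (np.arctan2(head_axis[1], head_axis[0])
         - np.arctan2(body_axis[1], body_axis[0]))
    # Wrap to (-180, 180].
    return float(np.degrees((a + np.pi) % (2 * np.pi) - np.pi))

rng = np.random.default_rng(0)

# Synthetic per-frame features standing in for the paper's combined
# 3D depth features and 2D contour features (dimension is arbitrary here).
n_frames, n_features = 900, 12
X = rng.normal(size=(n_frames, n_features))
y = rng.integers(0, len(BEHAVIORS), size=n_frames)
X += y[:, None] * 0.8  # shift each class so the toy data is separable

# Frame-by-frame SVM classifier, as in the abstract (RBF kernel assumed).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:700], y[:700])
print(f"held-out frame accuracy: {clf.score(X[700:], y[700:]):.2f}")
print(f"head angle example: {head_angle_deg((2, 1), (1, 0), (0, 0)):.1f} deg")
```

In practice, the paper also applies label optimization steps to the raw per-frame SVM outputs; those post-processing details are omitted here.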
ISSN: 0140-0118, 1741-0444
DOI: 10.1007/s11517-018-1816-1