Motion vectors and deep neural networks for video camera traps

Bibliographic Details
Published in: Ecological Informatics, 2022-07, Vol. 69, p. 101657, Article 101657
Authors: Riechmann, Miklas; Gardiner, Ross; Waddington, Kai; Rueger, Ryan; Leymarie, Frederic Fol; Rueger, Stefan
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Commercial camera traps are usually triggered by a Passive Infra-Red (PIR) motion sensor, necessitating a delay between triggering and the image being captured. This often severely limits the ability to record images of small and fast-moving animals. It also results in many "empty" images, e.g., owing to moving foliage against a background of a different temperature. In this paper we detail a new triggering mechanism based solely on the camera sensor. It is intended for use by citizen scientists and for deployment on an affordable, compact, low-power Raspberry Pi computer (RPi). Our system introduces a video frame filtering pipeline consisting of movement- and image-based processing, which makes the use of Machine Learning (ML) on a live camera stream feasible on an RPi. We describe our free and open-source software implementation of the system; introduce a suitable ecology efficiency measure that mediates between specificity and recall; provide ground truth for a video clip collection from camera traps; and evaluate the effectiveness of our system thoroughly. Overall, our video camera trap turns out to be robust and effective.

Highlights:
• We introduce a new triggering mechanism based solely on a standard camera sensor.
• Real-time video frame filtering is the first stage of movement and image processing.
• On selected frames, a second animal-detection stage uses a deep learning model.
• The entire pipeline runs on an inexpensive, low-power Raspberry Pi computer.
• We make our software open source and provide complete online documentation.
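The two-stage design described in the abstract (a cheap movement filter that gates a costly deep-learning detector) can be sketched as follows. This is a minimal illustration only: the mean-frame-difference measure, the `threshold` value, and the function names are assumptions for exposition, not the paper's actual motion-vector-based implementation.

```python
import numpy as np

def motion_score(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel difference between consecutive grayscale frames.

    A stand-in for the paper's movement-based filtering: cheap enough to
    run on every frame of a live stream on a Raspberry Pi.
    """
    diff = frame.astype(np.int16) - prev_frame.astype(np.int16)
    return float(np.mean(np.abs(diff)))

def should_run_detector(prev_frame: np.ndarray,
                        frame: np.ndarray,
                        threshold: float = 8.0) -> bool:
    """Stage 1 gate: only frames whose motion score exceeds the (illustrative)
    threshold are passed on to the expensive stage-2 animal detector (ML model).
    """
    return motion_score(prev_frame, frame) > threshold
```

For example, a static scene (two identical frames) yields a motion score of 0.0 and never triggers the detector, while a frame that differs from its predecessor everywhere scores highly and does; this is what keeps the ML model off the critical path for most frames.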
ISSN:1574-9541
DOI:10.1016/j.ecoinf.2022.101657