Utilizing synthetic training data for the supervised classification of rat ultrasonic vocalizations


Bibliographic Details
Published in: The Journal of the Acoustical Society of America, 2024-01, Vol. 155 (1), p. 306-314
Authors: Scott, K. Jack; Speers, Lucinda J.; Bilkey, David K.
Format: Article
Language: English
Subjects:
Online access: Full text
Description

Abstract: Murine rodents generate ultrasonic vocalizations (USVs) with frequencies that extend to around 120 kHz. These calls are important in social behaviour, so their analysis can provide insights into the function of vocal communication and its dysfunction. The manual identification of USVs, and their subsequent classification into subcategories, is time-consuming. Although machine learning approaches to identification and classification can yield enormous efficiency gains, the time and effort required to generate training data can be high, and the accuracy of current approaches can be problematic. Here, we compare the detection and classification performance of a trained human against two convolutional neural networks (CNNs), DeepSqueak (DS) and VocalMat (VM), on audio containing rat USVs. Furthermore, we test the effect of inserting synthetic USVs into the training data of the VM CNN as a means of reducing the workload associated with generating a training set. Our results indicate that VM outperformed the DS CNN on measures of call identification and classification. Additionally, we found that augmenting the training data with synthetic images further improved accuracy, bringing it sufficiently close to human performance to allow the use of this software under laboratory conditions.
ISSN: 0001-4966, 1520-8524
DOI: 10.1121/10.0024340