Real-time auralization of a talker’s own voice in virtual rooms



Bibliographic Details
Published in: The Journal of the Acoustical Society of America, 2019-03, Vol. 145 (3), p. 1889
Main Authors: Whiting, Jennifer; Leishman, Timothy W.; Neilsen, Traci
Format: Article
Language: English
Online Access: Full text
Description
Abstract: While much has been done in the field of sound auralization in virtual rooms, the problem of hearing one's own voice in these environments has received less attention. A robust and feasible system for real-time auralization of talkers who are also listeners is needed. To address this requirement, a real-time convolution system (RTCS) was designed with the specific goal of "placing" a talker/listener in virtual acoustic environments. This system necessitated the development of several tools and methods. Oral-binaural room impulse responses were measured and characterized for a variety of rooms. The RTCS improved on past systems, in part through the derivation and inclusion of compensation filters, which corrected the linear auditory distortions of the RTCS components. Objective measures in the time and frequency domains were developed to assess the validity of the system. A jury-based listening study also indicated that RTCS users could speak and listen to their own voices in the virtual acoustic environments in a natural manner.
ISSN:0001-4966
1520-8524
DOI:10.1121/1.5101847
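
The abstract does not detail the RTCS implementation, but real-time auralization systems of this kind typically convolve the live microphone signal with a measured room impulse response block by block. As an illustrative sketch only (not the authors' actual system), the core operation can be written as FFT-based overlap-add convolution, where `blocks` and `ir` stand in for the captured speech frames and a measured oral-binaural room impulse response:

```python
import numpy as np

def overlap_add_convolve(blocks, ir, block_size):
    """Convolve a stream of fixed-size audio blocks with an impulse
    response using overlap-add: one FFT per incoming block, with the
    convolution tail carried over into the next block."""
    # FFT size: next power of two that holds a full linear convolution
    n_fft = 1
    while n_fft < block_size + len(ir) - 1:
        n_fft *= 2
    ir_spec = np.fft.rfft(ir, n_fft)        # IR spectrum, computed once
    tail = np.zeros(n_fft - block_size)     # overlap carried between blocks
    out = []
    for block in blocks:
        seg = np.fft.irfft(np.fft.rfft(block, n_fft) * ir_spec, n_fft)
        seg[: len(tail)] += tail            # add tail of the previous block
        out.append(seg[:block_size])        # emit one block of output
        tail = seg[block_size:].copy()      # save the new tail
    return np.concatenate(out)
```

In a real system the result matches direct convolution of the whole signal; a practical implementation would additionally partition long impulse responses and apply the compensation filters mentioned in the abstract before playback.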