Perceptual evaluation of multichannel synthesis of moving sounds as a function of rendering strategies and velocity
Published in: The Journal of the Acoustical Society of America, 2017-05, Vol. 141 (5), p. 3511-3511
Main authors: ,
Format: Article
Language: English
Online access: Full text
Abstract: Sound-field synthesis for static sound sources has been extensively studied. Recently, the synthesis of dynamic sound sources has garnered increased attention. Classical sound-field rendering strategies discretize dynamic sound fields as a sequence of stationary snapshots. The use of discrete multichannel arrays can generate further artifacts with moving sounds. Depending on the technique used, this results in an amplitude modulation due to successive loudspeaker contributions (VBAP) or in multiple comb-filtering (WFS), which can affect localization cues, especially at off-centered listening positions. We first present a detailed description of these artifacts. We then introduce a hybrid rendering strategy combining propagation simulation and VBAP at audio rate. We used this rendering strategy and WFS to synthesize white noise revolving around listeners on a circular 48-loudspeaker array. On each trial, participants had to identify the trajectory (circle, triangle, or square) for velocities ranging from 0.5 to 2 revolutions per second. Performance was well above chance level in all conditions. While WFS outperformed the hybrid rendering strategy at low velocities, no significant differences were observed at high velocities, for which participants relied on temporal rather than spatial cues. The results highlight how artifacts of the rendering strategies interfere with dynamic sound localization at different velocities.
ISSN: 0001-4966, 1520-8524
DOI: 10.1121/1.4987364
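
To make the mechanism described in the abstract concrete, the following Python snippet is a minimal sketch of pairwise 2D VBAP on a circular 48-loudspeaker array, with the panning gains recomputed at every sample, in the spirit of the audio-rate VBAP component of the hybrid strategy. It is not the authors' implementation; the sample rate, the function `vbap_gains`, and the constant-power normalization are assumptions made for illustration.

```python
# Illustrative sketch only (assumed parameters), not the authors' code.
import numpy as np

N_SPEAKERS = 48                    # circular array, as in the abstract
FS = 48000                         # sample rate (assumed)
SPEAKER_AZ = np.linspace(0.0, 2 * np.pi, N_SPEAKERS, endpoint=False)

def vbap_gains(source_az):
    """Pairwise 2D VBAP gains for one source azimuth (radians)."""
    gains = np.zeros(N_SPEAKERS)
    spacing = 2 * np.pi / N_SPEAKERS
    i = int(np.floor(source_az / spacing)) % N_SPEAKERS  # speaker just below the source
    j = (i + 1) % N_SPEAKERS                              # next speaker along the circle
    # Solve p = g_i * l_i + g_j * l_j, where l_i, l_j are unit vectors to the pair.
    L = np.column_stack((
        [np.cos(SPEAKER_AZ[i]), np.sin(SPEAKER_AZ[i])],
        [np.cos(SPEAKER_AZ[j]), np.sin(SPEAKER_AZ[j])],
    ))
    p = np.array([np.cos(source_az), np.sin(source_az)])
    g = np.clip(np.linalg.solve(L, p), 0.0, None)
    g /= np.linalg.norm(g) + 1e-12                        # constant-power normalization
    gains[i], gains[j] = g
    return gains

# One second of white noise revolving at 1 revolution per second;
# gains are recomputed for every sample ("audio rate").
rev_per_s = 1.0
noise = np.random.randn(FS)
out = np.zeros((FS, N_SPEAKERS))
for t in range(FS):
    az = 2 * np.pi * rev_per_s * t / FS
    out[t] = noise[t] * vbap_gains(az)
```

Because only two adjacent loudspeakers are active at any instant, the contributing pair changes continuously as the source revolves; at an off-centered listening position the differing distances of successive pairs produce the amplitude modulation the abstract attributes to VBAP, while the hybrid strategy described there combines such audio-rate gain updates with a propagation simulation.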