Modelling individual head‐related transfer function (HRTF) based on anthropometric parameters and generic HRTF amplitudes
Published in: CAAI Transactions on Intelligence Technology, 2023-06, Vol. 8 (2), pp. 364-378
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The head-related transfer function (HRTF) plays a vital role in immersive virtual reality and augmented reality technologies, especially in spatial audio synthesis for binaural reproduction. This article proposes a deep learning method that takes generic HRTF amplitudes and anthropometric parameters as input features for individual HRTF generation. Fully convolutional neural networks were designed to predict each individual HRTF amplitude spectrum in the full-space directions from the key anthropometric parameters and the generic HRTF amplitudes, while the interaural time delay (ITD) was predicted by a transformer module. In the amplitude prediction model, an attention mechanism was adopted to better capture the relationship between HRTF amplitude spectra at two distinct directions with a large angular difference. Finally, using the minimum-phase model, the predicted amplitude spectra and ITDs were combined to obtain a set of individual head-related impulse responses. Besides training the HRTF amplitude and ITD generation models separately, their joint training was also considered and evaluated. The root-mean-square error and the log-spectral distortion were selected as objective metrics to evaluate the performance. Subjective experiments further showed that the auditory source localisation performance of the proposed method was better than that of other methods in most cases.
ISSN: 2468-2322
DOI: 10.1049/cit2.12196
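
The abstract describes a final reconstruction step in which a predicted amplitude spectrum and a predicted ITD are combined under the minimum-phase model to yield a pair of head-related impulse responses. The sketch below illustrates that general technique only; it is not the authors' implementation, and the function names, FFT length, sampling rate, and ITD sign convention are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): build minimum-phase HRIRs from a
# predicted one-sided amplitude spectrum and a predicted ITD.
import numpy as np

def minimum_phase_ir(magnitude, n_fft):
    """Minimum-phase impulse response from a one-sided magnitude spectrum
    of length n_fft // 2 + 1, via the real-cepstrum folding method."""
    # Two-sided magnitude spectrum; floor avoids log(0).
    mag_full = np.concatenate([magnitude, magnitude[-2:0:-1]])
    cepstrum = np.real(np.fft.ifft(np.log(np.maximum(mag_full, 1e-8))))

    # Fold the cepstrum onto its causal part (minimum-phase cepstrum).
    folded = np.zeros_like(cepstrum)
    folded[0] = cepstrum[0]
    folded[1:n_fft // 2] = 2.0 * cepstrum[1:n_fft // 2]
    folded[n_fft // 2] = cepstrum[n_fft // 2]

    # Back to the frequency domain, exponentiate, and return the real IR.
    return np.real(np.fft.ifft(np.exp(np.fft.fft(folded))))

def hrir_pair(mag_left, mag_right, itd_s, fs=44100, n_fft=512):
    """Combine left/right amplitude spectra with an ITD (in seconds).
    Convention assumed here: positive itd_s delays the right ear."""
    h_l = minimum_phase_ir(mag_left, n_fft)
    h_r = minimum_phase_ir(mag_right, n_fft)
    delay = int(round(abs(itd_s) * fs))  # integer-sample delay only
    if itd_s >= 0:
        h_r = np.roll(h_r, delay)
    else:
        h_l = np.roll(h_l, delay)
    return h_l, h_r
```

The cepstrum folding produces the minimum-phase counterpart of the given magnitude spectrum; the integer-sample delay via np.roll is a simplification, and a practical renderer would typically apply a fractional delay instead.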