A Computational Model to Implement Binaural Synthesis in a Hard Real-Time Auditory Virtual Environment

Research paper by Fabián C. Tommasini, Oscar A. Ramos, Mercedes X. Hüg, Sebastián P. Ferreyra

Indexed on: 19 Mar '19
Published on: 08 Mar '19
Published in: Acoustics Australia


There is growing interest in the development and evaluation of real-time auditory virtual environments (AVEs). Implementing this type of simulation system on general-purpose computers is still a challenge, and few studies have evaluated the perceived quality of synthesized sounds in simulated acoustic scenes. To evoke in the listener a correct image of the modeled space, the system must be dynamic and interactive. That is, it must respond to changes in the acoustic scenario produced by the listener's movements within a perceptually acceptable time and with an update rate that guarantees continuity in the reproduction of sound events. Hard real-time systems ensure that a given task completes within a given time interval, providing deterministic behavior for applications with timing constraints. In this article, a computational model for implementing binaural synthesis in a hard real-time AVE is presented and evaluated. The model was implemented in an open-source auralization system. Measurements and real-time simulations of a university classroom were carried out to validate reverberation time parameters and to evaluate system performance. In addition, measured and simulated binaural soundtracks (composed from anechoic stimuli) were compared in terms of three selected perceptual attributes through subjective evaluations at static positions. The results showed that real-time performance was acceptable according to values previously reported in the literature, and that the prediction errors for the measured parameters were within the subjective difference limens. The computational model was thus able to generate an AVE with acceptable overall perceptual quality.
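To give a concrete flavor of the core signal-processing step in binaural synthesis, the sketch below shows block-wise overlap-add convolution of an anechoic mono signal with a left/right head-related impulse response (HRIR) pair. This is a minimal, hypothetical illustration and not the paper's implementation: the function name, block size, and HRIR arrays are assumptions. In a hard real-time AVE, each block would be processed within a fixed deadline, and the HRIR pair would be swapped (typically with crossfading) as the listener moves.

```python
import numpy as np

def binaural_block_synthesis(mono, hrir_l, hrir_r, block_size=256):
    """Block-wise overlap-add convolution of a mono signal with a pair of
    HRIRs, producing a two-channel (binaural) output array.

    Processing in fixed-size blocks is what makes a real-time deadline per
    audio buffer meaningful; by linearity, summing the per-block results
    equals convolving the whole signal at once.
    """
    n_taps = len(hrir_l)
    out_len = len(mono) + n_taps - 1
    out = np.zeros((out_len, 2))
    for start in range(0, len(mono), block_size):
        block = mono[start:start + block_size]
        # Each convolved block extends n_taps - 1 samples past the block
        # boundary; the tails are summed into the shared output buffer.
        end = start + len(block) + n_taps - 1
        out[start:end, 0] += np.convolve(block, hrir_l)
        out[start:end, 1] += np.convolve(block, hrir_r)
    return out
```

In practice, a real-time system would use frequency-domain (FFT-based) partitioned convolution rather than time-domain `np.convolve`, since the latter's cost grows quickly with HRIR length; the block structure and deadlines, however, are the same.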