Abstract
In complex acoustic environments such as multiple connected rooms, reverberation is highly dependent on the positions of sound sources and listeners — not only in terms of early reflections, but also of late reverberation. Modeling this positional dependency accurately is important for immersive, interactive applications such as virtual reality, augmented reality, and video games, where reverberation needs to be adapted in real time as sound sources and listeners move. The recently proposed modal decomposition of acoustic radiance transfer (MoD-ART) method can evaluate position-dependent late reverberation characteristics in real time, based on physical properties of the modeled environment, and it is specifically designed for complex acoustic environments. Auralization of the reverberation characteristics (i.e., their application to audio signals) can be accomplished either with convolution or with delay-based reverberators. In this paper, we propose a method to auralize late reverberation efficiently in the presence of multiple sound sources and listeners, based on the MoD-ART model. The proposed method inherits the favorable complexity scaling of MoD-ART's modeling and extends it to auralization, enabling the rendering of late reverberation in scenarios with hundreds of interactive sound sources and listeners. Furthermore, the proposed method correctly models fully dynamic scenarios, in which both sources and listeners may move, with no rendering latency.