This looks like it could be a major leap forward for audio realism in 3D games. Whereas most methods require setting up an algorithmic or convolution reverb that you think matches the space, this takes the geometry and material data straight from the engine for the surfaces the sound is bouncing off and feeds it into multiple reverbs that are blended in real time based on the character's position. It's also going to be open source.
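For anyone wondering what the blending step might look like: here's a minimal sketch, assuming a set of reverb "probes" placed around the level whose wet outputs get crossfaded by inverse-distance weighting from the listener. All of the names here (`ReverbProbe`, `mixReverbs`, etc.) are hypothetical; the actual project presumably derives each reverb's character from the traced geometry and materials rather than from plain distance weights.

```cpp
// Hypothetical sketch: crossfading several precomputed reverbs by listener position.
// Not the project's actual API, just an illustration of position-based blending.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// One probe per acoustic zone; the engine would bake its reverb settings
// from the surrounding geometry and surface materials.
struct ReverbProbe {
    Vec3 position;
    // Stand-in for a full reverb: each probe owns the wet buffer its own
    // reverb instance produced for the current audio block.
    std::vector<float> wetBlock;
};

// Blend the wet outputs of all probes with normalized inverse-distance
// weights, so the mix shifts smoothly as the character moves between spaces.
std::vector<float> mixReverbs(const std::vector<ReverbProbe>& probes,
                              const Vec3& listener, size_t blockSize) {
    std::vector<float> out(blockSize, 0.0f);
    std::vector<float> weights(probes.size());
    float total = 0.0f;
    for (size_t i = 0; i < probes.size(); ++i) {
        // Small epsilon so a listener standing on a probe doesn't divide by zero.
        weights[i] = 1.0f / (distance(listener, probes[i].position) + 1e-3f);
        total += weights[i];
    }
    for (size_t i = 0; i < probes.size(); ++i) {
        float w = weights[i] / total;  // normalize so the blend sums to 1
        for (size_t s = 0; s < blockSize; ++s)
            out[s] += w * probes[i].wetBlock[s];
    }
    return out;
}
```

Normalizing the weights is the important bit: it keeps the total wet level constant while the balance between spaces shifts, so walking through a doorway is a crossfade rather than a jump.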
I’m surprised this isn’t common practice already, considering the absurd amount of R&D that goes into contemporary visual effects. Cool demo.