First off: “IRL” sound is important for judging direction - you can absolutely hear where something is coming from - so simulating that better in a game should help make the game “feel” more realistic, and better. On the technical side, sound transport and light transport are conceptually very similar. Although there are differences in “how” things reflect, you still need frequent “line of sight” computations, which are exactly what ray tracing does - so yes, having fast ray tracing should help make sound simulation better and more accurate.
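To make the “line of sight” point concrete, here is a minimal sketch in Python of the kind of occlusion query involved: is the straight segment from listener to sound source blocked by scene geometry? The spheres-as-occluders scene, the function names, and the flat 0.25 muffling factor are all illustrative assumptions; a real engine would use its actual geometry and frequency-dependent filtering.

```python
import math

def segment_hits_sphere(p0, p1, center, radius):
    """True if the line segment from p0 to p1 intersects the sphere.

    Solves the standard ray-sphere quadratic with the ray direction set to
    (p1 - p0), so a hit parameter t in (0, 1) lies between the endpoints.
    """
    d = [p1[i] - p0[i] for i in range(3)]
    m = [p0[i] - center[i] for i in range(3)]
    a = sum(x * x for x in d)
    b = 2.0 * sum(m[i] * d[i] for i in range(3))
    c = sum(x * x for x in m) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False
    root = math.sqrt(disc)
    for t in ((-b - root) / (2.0 * a), (-b + root) / (2.0 * a)):
        if 0.0 < t < 1.0:
            return True
    return False

def direct_path_gain(listener, source, occluders, occluded_gain=0.25):
    """Line-of-sight check: full gain if nothing blocks the listener-to-source
    segment, otherwise a made-up flat muffling factor (a real simulation
    would filter by frequency instead of scaling uniformly)."""
    for center, radius in occluders:
        if segment_hits_sphere(listener, source, center, radius):
            return occluded_gain
    return 1.0
```

This is exactly the query shape that RTX hardware accelerates: the per-ray work is trivial, and performance is dominated by how fast you can answer “does this segment hit anything?” against the scene.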
Eric notes two things: VRWorks - Audio | NVIDIA Developer is from NVIDIA and may be just what the poster wants. For more on research in the area, a good place to start is the paper “Guided Multiview Ray Tracing for Fast Auralization” by Micah Taylor, Anish Chandak, Qi Mo, Christian Lauterbach, Carl Schissler, and Dinesh Manocha (2012).
Response from Tony Scudiero:
There’s a long history of ray tracing in audio: a number of commercial products use ray methods for generating synthetic room impulse response filters. RTX technology is actually very good for acoustic simulations, as the material interactions of sound are usually modeled at a coarser granularity than the interactions of material and light. Acoustic simulations tend to have simple shaders, making their performance fundamentally a function of ray-scene queries, which RTX accelerates quite well!
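A sketch of how ray results become a room impulse response filter: each propagation path the ray tracer finds between source and listener contributes one impulse, delayed by its travel time and attenuated by distance. The function name, sample rate, and 1/distance-only attenuation are assumptions for illustration; real simulators also apply frequency-dependent absorption at every surface bounce.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def paths_to_impulse_response(path_lengths_m, sample_rate=48000, ir_seconds=0.5):
    """Turn propagation path lengths (e.g. found by tracing rays from source
    to listener) into a synthetic room impulse response.

    Each path adds a single impulse: delay = length / speed of sound,
    amplitude = 1/length spreading loss. Per-bounce material absorption
    is deliberately omitted to keep the sketch short.
    """
    ir = [0.0] * int(sample_rate * ir_seconds)
    for length in path_lengths_m:
        delay_samples = int(round(length / SPEED_OF_SOUND * sample_rate))
        if delay_samples < len(ir):
            ir[delay_samples] += 1.0 / max(length, 1.0)
    return ir

# Example: a 3 m direct path plus two early reflections in a small room.
ir = paths_to_impulse_response([3.0, 7.5, 11.2])
```

Convolving dry source audio with `ir` then yields the “room sound” - the shading work per ray is almost nothing, which is why the ray-scene queries dominate.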
One of the fundamental challenges of ray tracing acoustic energy is that the wavelengths in question are about a million times longer than those of visible light. Acoustic wavelengths can be on the order of a meter, which is the same order of magnitude as many everyday objects. The consequence is that many effects must be treated over a cross-sectional area of the wavefront: the interaction of sound energy with a surface cannot be accurately modeled only at an infinitesimal point. That said, there has been some research on how these effects can be treated using ray tracing techniques. The ‘right’ approach usually depends on your goals: accuracy or speed.
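The wavelength claim is easy to check with λ = c/f (speed of sound ≈ 343 m/s in air; the function name is just for illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def wavelength_m(frequency_hz):
    """Acoustic wavelength in air: lambda = c / f."""
    return SPEED_OF_SOUND / frequency_hz

# The audible band spans roughly 20 Hz to 20 kHz:
low = wavelength_m(20.0)      # ~17 m: larger than most rooms
mid = wavelength_m(343.0)     # exactly 1 m: the scale of doorways and furniture
high = wavelength_m(20000.0)  # ~17 mm: small enough to behave almost like light
```

Compare the 1 m midband figure with visible light at ~500 nm: the ratio is on the order of a million, which is why diffraction around ordinary objects matters for sound but is negligible for light.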
From a technological perspective, there’s absolutely nothing standing in the way of writing a real-time acoustics simulation that uses ray tracing graphics APIs like DXR or VkRay to do sound propagation in tandem with ray-traced graphics. The available ray-tracing power of current-generation GPUs should be able to handle a moderately complex acoustic simulation alongside rendering. Depending on how the graphics engine is designed, primary rays could even be used for both purposes, further economizing the simulation. While this is perfectly possible, I’m not aware of anyone who has actually done it in one of the graphics APIs.
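The “primary rays used for both purposes” idea can be sketched conceptually: a single ray-scene query returns a hit record, and two consumers read it, one shading the pixel, the other attenuating acoustic energy. This is plain Python standing in for a hit shader; the class, field names, and `scene` callable are all hypothetical, not DXR or VkRay API.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    """Result of one ray-scene query (roughly what a hit shader receives)."""
    point: tuple
    visual_albedo: float        # fraction of light the surface reflects
    acoustic_absorption: float  # fraction of sound energy the surface absorbs

def shade_pixel_and_bounce_sound(origin, direction, scene,
                                 light_intensity, sound_energy):
    """One primary ray serving two consumers: the same hit record shades the
    pixel for graphics AND attenuates the acoustic energy continuing past
    the surface. `scene` is a callable returning a Hit or None, so the
    sketch stays self-contained instead of invoking a real ray query."""
    hit = scene(origin, direction)
    if hit is None:
        # Ray escaped the scene: unshaded background, unattenuated sound.
        return light_intensity, sound_energy
    color = light_intensity * hit.visual_albedo
    remaining_sound = sound_energy * (1.0 - hit.acoustic_absorption)
    return color, remaining_sound
```

The economy comes from paying for the expensive part, the traversal and intersection, exactly once per ray; per-hit acoustic bookkeeping is cheap by comparison.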
NVIDIA’s VRWorks Audio, which is a relatively simple acoustics simulation intended for interactive experiences, uses OptiX. Version 2.0 of that SDK can make use of RTX hardware when available.