Hi!
Our team is using Isaac Sim to run realistic audio simulations and we are loving it so far! However, we are running into a few limitations that we would like to address by tweaking the source code to fit our needs. For example, we would like to have more than one active listener at the same time, and we would like to simulate some additional audio phenomena.
I've used the VS Code debugger extension to step into the code, and it seems that the code we're looking for is in the _audio.cpython-37m-x86_64-linux-gnu.so dynamic library (located in isaac_sim-2021.1.1/kit/extscore/omni.usd/omni/usd/audio/). Is there any way we could access its source code? If not, is there a workaround we could use to meet our needs?
Thanks!
I’ve passed the request along to the Omniverse audio team to see if there is a way to solve your use case, thanks!
Hi @francis.cardinal.01,
I’m glad to hear you’re exploring audio support in Omniverse apps :). It’s still fairly early in its development, and it will hopefully gain some new features in future releases.
For your question about multiple listeners, Omniverse-based apps don’t currently support that functionality in a single instance of the app. However, it could become possible with future enhancements.
Unfortunately, the module you’ve found is just the Python binding library that exposes some parts of the high-level audio system. It’s really only a thin shim layer over the high-level audio system code. There are a couple of layers under that, in various C++ modules, that you’d need to work with to build out multiple-listener functionality. The low-level audio system is located in carb.audio-forge.plugin.
The general problem with multiple listeners is that in a single desktop/laptop setup, there’s only a single set of speakers (per audio device) for a user to hear from. Audio systems typically assume that the ‘listener’ object simulates the user’s location in virtual space, so all spatial audio calculations are based on the position of that single listener entity. If that listener changes locations quickly to act as a kind of ‘time-sharing listener’, it generally just results in unexpected behaviour from the audio output.
That said, it is technically possible to achieve such an effect with carb.audio by running multiple audio ‘contexts’. Each context owns a single listener and outputs to a single device on the local system. If each context outputs to a different device (i.e., two or more USB/Bluetooth headsets connected to the system), you could effectively have multiple listeners active in the scene. This was part of our original audio system design for Omniverse, but it had to be simplified somewhat just to get an initial version out the door.
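The one-listener-per-context design above can be sketched with a small toy model. Note this is not the carb.audio API (the class and function names here are made up for illustration); it only shows the structural idea: each context pairs exactly one listener with one output device, so multiple listeners require multiple contexts backed by distinct devices.

```python
from dataclasses import dataclass

# Toy model of the "one context per listener/device" idea. The names
# Listener, AudioContext, and spin_up_contexts are hypothetical -- they
# are NOT part of carb.audio, just an illustration of the design.

@dataclass
class Listener:
    name: str
    position: tuple  # (x, y, z) position in the virtual scene

@dataclass
class AudioContext:
    device: str          # output device, e.g. "USB headset #1"
    listener: Listener   # the single listener this context renders for

def spin_up_contexts(pairs):
    """Create one context per (device, listener) pair."""
    return [AudioContext(device=d, listener=l) for d, l in pairs]

# Two listeners in the scene -> two contexts, each bound to its own device.
contexts = spin_up_contexts([
    ("USB headset #1", Listener("camera_A", (0.0, 0.0, 0.0))),
    ("USB headset #2", Listener("camera_B", (5.0, 0.0, 2.0))),
])

for ctx in contexts:
    print(f"{ctx.device} renders spatial audio for {ctx.listener.name}")
```

The key constraint the model captures is the 1:1:1 relationship between context, listener, and device: you can’t attach two listeners to one context, which is why a single-device machine effectively limits you to one active listener.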
What is the specific effect or use case you’re looking for? There may be a way to handle it with how the audio system currently works in Omniverse apps. For example, multiple users could have the same USD stage open, with each one selecting a different listener object as their active one. The active listener can also be selected programmatically in Python. As long as each user makes the listener change on a local layer of the stage, it should work (though it’s not something I’ve specifically tested yet).
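The multi-user workaround above can be illustrated with a small stand-alone sketch. This is not the Omniverse/USD API (UserSession and its methods are invented for illustration); it only shows why keeping the selection on a per-user local layer works: each user's active-listener choice lives in their own override data, so the shared stage is never mutated and the users never conflict.

```python
# Toy sketch (hypothetical names, not the Omniverse API) of per-user
# active-listener selection kept on a local layer.

# One stage shared by all users, containing several listener prims.
shared_stage = {"listeners": ["listener_front", "listener_rear", "listener_drone"]}

class UserSession:
    """One user's view of the shared stage, plus their local overrides."""

    def __init__(self, user, stage):
        self.user = user
        self.stage = stage
        self.local_layer = {}  # per-user overrides; never written to shared data

    def set_active_listener(self, prim_path):
        # Validate against the shared stage, but record the choice locally.
        if prim_path not in self.stage["listeners"]:
            raise ValueError(f"no such listener: {prim_path}")
        self.local_layer["active_listener"] = prim_path

alice = UserSession("alice", shared_stage)
bob = UserSession("bob", shared_stage)

alice.set_active_listener("listener_front")
bob.set_active_listener("listener_drone")

# Each user hears through their own listener; the shared stage is untouched.
print(alice.local_layer, bob.local_layer)
```

The design point is that the local layer shadows the shared data: if both users wrote their selection into the shared stage instead, the last writer would override the other's active listener.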
thanks,
eric
Thanks a lot for your detailed answer! I’ll look into it!