I’m glad to hear you’re exploring audio support in Omniverse apps :). It’s still pretty early in its development and will hopefully have some new features coming up in future releases.
For your question about multiple listeners, Omniverse-based apps don’t currently support that functionality in a single instance of the app. However, it may become possible with future enhancements.
Unfortunately, the module you’ve found is just the Python binding library that exposes some bits of the high-level audio system. It’s really just a thin shim layer over the high-level audio system code. There are a couple of layers under that, in various C++ modules, that you’d need to make use of to build out multiple-listener functionality. The low-level audio system is located in
The general problem with multiple listeners is that in a single desktop/laptop setup, there’s only a single set of speakers (per audio device) for a user to hear from. Audio systems typically assume that the ‘listener’ object simulates the user’s location in virtual space, so all the spatial audio calculations are based around the position of this single listener entity. If that listener changes location quickly to do a kind of ‘time-sharing’ listener, it generally just results in unexpected behaviour from the audio output.
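To make that single-listener assumption concrete, here’s a minimal, self-contained Python sketch (not the carb.audio API — the function and parameter names are purely illustrative) of how spatial gain is typically computed relative to one listener position:

```python
import math

def spatial_gain(listener_pos, source_pos, ref_dist=1.0, max_dist=100.0):
    """Inverse-distance attenuation relative to a single listener.

    Every spatialized source is attenuated against one listener
    position, which is why rapidly swapping that position between
    several 'virtual' listeners produces audible artifacts.
    """
    dist = math.dist(listener_pos, source_pos)
    if dist <= ref_dist:
        return 1.0  # inside the reference distance: full volume
    if dist >= max_dist:
        return 0.0  # beyond the falloff range: silent
    return ref_dist / dist

# A source 10 units from the listener is attenuated to 1/10th gain:
print(spatial_gain((0, 0, 0), (10, 0, 0)))  # 0.1
```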
That said, however, it is technically possible to have such an effect with carb.audio by running multiple audio ‘contexts’. Each context owns a single listener and outputs to a single device on the local system. If each context is outputting to a different device (i.e., 2+ USB/Bluetooth headsets connected to the system), you could effectively have multiple listeners active in the scene. This was part of our original audio system design for Omniverse, but it had to be simplified somewhat just to be able to get something out there to start.
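As a conceptual sketch of that design (plain Python, not the actual carb.audio interfaces — the class and method names here are invented for illustration), one listener per context, one device per context, looks roughly like this:

```python
from dataclasses import dataclass, field

@dataclass
class Listener:
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class AudioContext:
    """Models a carb.audio-style context: one listener, one output device."""
    device_name: str
    listener: Listener = field(default_factory=Listener)

class AudioSystem:
    """Multiple contexts -> multiple listeners, each bound to its own device."""
    def __init__(self):
        self.contexts = []

    def create_context(self, device_name):
        ctx = AudioContext(device_name)
        self.contexts.append(ctx)
        return ctx

# Two headsets attached to the same machine, each hearing the scene
# from its own listener position:
system = AudioSystem()
ctx_a = system.create_context("USB Headset A")
ctx_b = system.create_context("Bluetooth Headset B")
ctx_a.listener.position = (0.0, 0.0, 0.0)
ctx_b.listener.position = (25.0, 0.0, 5.0)
```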
What is the specific effect or use case you’re looking for? Maybe there’s some way to handle it with the way the audio system currently works in Omniverse apps. For example, it would be possible to have multiple different users open the same USD stage, with each one selecting a different listener object as their active one. The active listener can be selected programmatically in Python code too. As long as each of those users makes the listener change on a local layer of the stage, it should work (though it’s not something I’ve specifically tested yet).
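To illustrate the ‘per-user local layer’ idea, here’s a small self-contained Python model (plain dictionaries standing in for USD layers — this is not the pxr/omni.usd API, and the attribute name is made up) of how each user’s local opinion of the active listener overrides the shared stage without affecting other users:

```python
# Shared (root) layer: the opinion everyone sees by default.
shared_layer = {"activeListener": "/World/Listener_Default"}

def resolve(shared, local):
    """Stronger (local) opinions win over the shared layer, loosely
    mirroring how a USD session layer locally overrides the root layer."""
    merged = dict(shared)
    merged.update(local)
    return merged["activeListener"]

# Each user keeps their own local layer; changes there never touch
# the shared stage, so users can pick different listeners at once.
user_a_local = {"activeListener": "/World/Listener_CameraA"}
user_b_local = {}  # user B has no local override

print(resolve(shared_layer, user_a_local))  # /World/Listener_CameraA
print(resolve(shared_layer, user_b_local))  # /World/Listener_Default
```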