Hi, I have created some models with NeMo and have Jarvis working fine with trained models for sentiment, named entity recognition, etc. However, I am not sure how to export them to support live voice, or a character backed by NeMo/Jarvis as in the raindrop character video. All I can see is USD export and Maya export, which is fine if I want to make a movie or a cutscene in a game, but I'm not sure how to utilize the TensorRT engine behind the Audio2Face UI. Does anybody know of an API that might help here?
Hi @mark.hembrowlm4xe The Audio2Face beta app does not currently support this pipeline. Stay tuned for updates.
To give a bit more info on this: at the time, we had a custom Jarvis Client API that handled the communication from Jarvis output to both Audio2Face and other parts of Omniverse Kit, which drove the character behavior. I need to double-check, but I don't believe this API is part of the latest Jarvis release at the moment. If there is interest in such an API, let us know.
Hi @siyuen ,
Very interested, yes. Jarvis is clearly present in screenshots of the Omniverse UI in some of the demos, but it is not available in the current Audio2Face beta. I've looked into how I could write an audio input to Omniverse, but so far I can't see anywhere that I can feed in an audio stream.
Noted. We will keep everyone updated if there are changes to the Jarvis connection or streaming audio support.
We do have audio support in Kit, but I don't think it supports audio streaming. I can find out and let you know.
The other thing is that the original streaming audio was just a Python library we used to handle the audio streaming and feed it to A2F. It was nothing custom.
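Since the streaming side was apparently just plain Python rather than an Omniverse-specific API, here is a minimal sketch of the kind of chunked-audio producer such a pipeline would use. This is purely illustrative and uses only the standard library; `stream_wav` is a hypothetical name, and whatever actually consumes the chunks (gRPC client, socket, A2F endpoint) is not shown because that interface isn't public.

```python
# Hypothetical sketch: read a 16-bit PCM WAV file and yield it in small
# fixed-size chunks, the way a streaming client would push audio to a
# downstream consumer (e.g. an inference service). Standard library only.
import wave

def stream_wav(path, chunk_frames=1600):
    """Yield raw PCM byte chunks from a WAV file.

    At 16 kHz mono 16-bit audio, 1600 frames is ~0.1 s per chunk,
    a typical granularity for low-latency streaming.
    """
    with wave.open(path, "rb") as wf:
        while True:
            data = wf.readframes(chunk_frames)
            if not data:  # end of file
                break
            yield data
```

Each yielded chunk would then be handed to whatever transport the audio-driven animation service expects; the chunk size trades off latency against per-message overhead.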