Error with gRPC calls to streaming player

I’m trying to send an audio stream to the streaming player. I’ve tried various iterations of the test_client.py code, but I’m getting the following error:
Error sending start marker: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = “Exception iterating requests!”
debug_error_string = “None”

I’m testing with the provided demo_fullface_streaming scene.
I’ve even asked ChatGPT for help (lol), and it gave up and pointed me to the gRPC server developer.

Hello @Lgraync! I’ve shared this post with the dev team for further assistance. In case it helps, here is a link to our documentation for the Audio Streaming Player extension.

https://docs.omniverse.nvidia.com/prod_extensions/app_audio2face/user-manual/audio2face-tool/streaming-audio-player.html

I’m not a gRPC expert, but there is example code at:
D:\ov\lib\prod-audio2face-2022.2.0\exts\omni.audio2face.player\omni\audio2face\player\scripts\instances.py

Find send_example_track(self) around line 469.
This is what A2F uses when you right-click on the streaming audio player and send the example audio.
Have you tried it?

You can find this omni.audio2face.player extension folder via the folder button in the Extension Manager.
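For reference, the non-streaming path in the sample test_client.py boils down to something like the sketch below. The message and stub names follow the audio2face.proto / audio2face_pb2 modules that ship with the sample; the URL and wav path are just placeholders for your setup, so treat this as a rough outline rather than the exact code in instances.py:

```python
import grpc
import numpy as np
import soundfile

import audio2face_pb2
import audio2face_pb2_grpc

# Placeholder values -- adjust to your setup
URL = "localhost:50051"                              # A2F gRPC server address
INSTANCE_NAME = "/World/audio2face/PlayerStreaming"  # prim path of the streaming player

def push_audio_track(url, audio_data, samplerate, instance_name):
    """Send a whole audio buffer in a single PushAudio request."""
    with grpc.insecure_channel(url) as channel:
        stub = audio2face_pb2_grpc.Audio2FaceStub(channel)
        request = audio2face_pb2.PushAudioRequest()
        request.audio_data = audio_data.astype(np.float32).tobytes()  # float32 PCM
        request.samplerate = samplerate
        request.instance_name = instance_name
        request.block_until_playback_is_finished = True
        response = stub.PushAudio(request)
        print(response)

# Usage: read a wav file and push it as one block
data, samplerate = soundfile.read("voice_male_p1_neutral.wav", dtype="float32")
if data.ndim > 1:
    data = np.mean(data, axis=1)  # mix down to mono if needed
push_audio_track(URL, data, samplerate, INSTANCE_NAME)
```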

Thanks @WendyGram !

@esusantolim thanks for the suggestion. Unfortunately, that example sends an audio file (voice_male_p1_neutral.wav) to the player. I could convert my audio stream to a file and send it that way, but I’d prefer to pass the audio stream directly to the player.

I know it can be done, and the provided test_client.py includes code that references this capability. The problem is that it “fakes” the capability by starting from an audio file, not a stream. I partially understand why they do this for an example (they don’t need access to a live stream), but it doesn’t really help someone like me who actually wants to use a real audio stream.

Here is the comment from the relevant code in test_client.py:

This function pushes audio chunks sequentially via PushAudioStreamRequest()
The function emulates the stream of chunks, generated by splitting input audio track.
But in a real application such stream of chunks may be aquired from some other streaming source.
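For what it’s worth, the only part of the sample that actually depends on a file is the loop that slices the track into chunks; if the request generator pulls from a live source instead, the rest can stay the same. Below is a rough sketch of that idea, assuming the audio2face_pb2 messages from the sample proto; the URL, sample rate, and the queue used as a hand-off point are all placeholders, not anything A2F requires. Also, as far as I understand gRPC, the “Exception iterating requests!” error is reported whenever the client-side request generator itself raises, so the traceback you need is usually inside the generator rather than on the server.

```python
import queue

import grpc
import numpy as np

import audio2face_pb2
import audio2face_pb2_grpc

URL = "localhost:50051"                              # placeholder A2F gRPC address
INSTANCE_NAME = "/World/audio2face/PlayerStreaming"  # streaming player prim path
SAMPLERATE = 22050                                   # must match your live stream

# Hypothetical hand-off point: whatever produces your live audio puts
# float32 numpy chunks into this queue, and None when it is done.
audio_queue: "queue.Queue" = queue.Queue()

def request_generator():
    # 1) Send the start marker first, exactly once
    start = audio2face_pb2.PushAudioRequestStart(
        samplerate=SAMPLERATE,
        instance_name=INSTANCE_NAME,
        block_until_playback_is_finished=True,
    )
    yield audio2face_pb2.PushAudioStreamRequest(start_marker=start)
    # 2) Then yield audio chunks as they arrive from the live source
    while True:
        chunk = audio_queue.get()
        if chunk is None:  # sentinel: stream finished
            break
        yield audio2face_pb2.PushAudioStreamRequest(
            audio_data=chunk.astype(np.float32).tobytes()
        )

def push_stream():
    with grpc.insecure_channel(URL) as channel:
        stub = audio2face_pb2_grpc.Audio2FaceStub(channel)
        response = stub.PushAudioStream(request_generator())
        print(response)
```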

I don’t have a solution to your problem, but if you don’t mind, I’d like to ask a few questions about your experience with test_client.py. Were you able to connect to the Audio2Face streaming instance? Did you use an absolute path, or did you just use the prim path provided in Audio2Face (/World/audio2face/PlayerStreaming)?

Thanks!

Hi @alexi, I was only able to get it working by pushing an audio file, not a stream. I used the prim path to connect to the streaming instance.
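For reference, the values I plugged in looked roughly like this (the port is whatever your A2F gRPC server is configured with; 50051 is the default used in the sample client):

```python
# Placeholder connection values -- adjust the port to your A2F gRPC server settings
url = "localhost:50051"
instance_name = "/World/audio2face/PlayerStreaming"  # prim path of the streaming player
```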

I’ve pushed a stream: I took the Riva TTS demo example and modified it to use my own TTS model.
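In case it helps anyone else, the shape of the change was: wherever the TTS demo hands you an audio chunk, convert it and feed it to whatever drives PushAudioStream (for example a queue like the one in the sketch earlier in this thread). A minimal sketch, assuming your TTS model emits raw 16-bit PCM chunks; the chunk iterable and queue here are hypothetical stand-ins, not part of the Riva or A2F APIs:

```python
import numpy as np

def feed_player(tts_chunks, audio_queue):
    """tts_chunks: iterable of raw 16-bit PCM byte chunks (hypothetical; whatever your TTS emits).

    Converts each chunk to float32 samples in [-1, 1] and hands it to the queue
    that the gRPC request generator reads from.
    """
    for raw in tts_chunks:
        samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
        audio_queue.put(samples)
    audio_queue.put(None)  # sentinel so the request generator stops cleanly
```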