Existing Riva TTS OmniGraph Node?

Hey everyone,
I want an OmniGraph node that takes text (and possibly a language and voice) as input, sends that information to my Riva server (running the TTS models) over gRPC, and returns the audio of the text being spoken as output. I guess I could create this node on my own using the Python Riva TTS tutorials/samples included with the Riva Quickstart, but I find it hard to believe that something like this isn't already available.

Please advise on what I should use or if I should code it up myself. Or maybe there’s a gRPC node I can work with?
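For context, the gRPC call itself seems small. Here's a minimal sketch based on my reading of the Riva Quickstart Python samples; the package name, server address, voice name, and exact client signatures are my assumptions, so treat the details as approximate:

```python
import wave

import riva.client

# Connect to the Riva server over gRPC (address is an assumption; point it
# at wherever the Quickstart deployed the TTS models).
auth = riva.client.Auth(uri="localhost:50051")
tts = riva.client.SpeechSynthesisService(auth)

# Offline (non-streaming) synthesis: returns the whole utterance as raw PCM.
response = tts.synthesize(
    text="Hello from Omniverse",
    voice_name="English-US.Female-1",   # voice availability depends on the deployed models
    language_code="en-US",
    sample_rate_hz=44100,
)

# Quick sanity check: dump the LINEAR_PCM bytes to a WAV file.
with wave.open("tts_output.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(44100)
    f.writeframes(response.audio)
```

So the missing piece isn't the TTS call, it's the node wiring around it.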

Hi @daniel.levine. I know we’re always working on incorporating more NVIDIA SDKs into Omniverse. Let me see if this is on the roadmap.

How’s the roadmap search going?

While I’m asking for Riva nodes, it would be great to have nodes for all the Riva capabilities: Riva ASR, Q&A, etc.

Hi @daniel.levine. Still working on this. Sorry for the delay.

Today, the only Riva integration that we have in OV is the Riva TTS extension in Audio2Face: Riva Text to Speech Integration Example with Streaming Audio Player in Omniverse Audio2Face - YouTube. It ships with Audio2Face, but should work in other apps too.

This is a good opportunity to create your own Riva nodes, and the Riva TTS extension might have some useful snippets for you.

Let me know if you have any other questions!

Thanks. I was considering doing that but didn’t want to reinvent the wheel. It seems like a very logical thing to provide out of the box so developers can stitch these capabilities together.

I guess I’ll look into developing it on my own.
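If anyone else lands here, this is roughly the shape I'm starting from: an OmniGraph Python node whose compute wraps the Riva client call from above. The node name, attribute names, and .ogn wiring are just my sketch, not an existing extension:

```python
# Hypothetical node implementation (e.g. OgnRivaTts.py), paired with an
# OgnRivaTts.ogn definition declaring inputs:text, inputs:voice,
# inputs:languageCode and an outputs:audio byte array. All names are
# illustrative assumptions.
import riva.client


class OgnRivaTts:
    """Send input text to a Riva TTS server and expose the synthesized audio."""

    @staticmethod
    def compute(db) -> bool:
        text = db.inputs.text
        if not text:
            return True  # nothing to synthesize this evaluation

        # Re-creating the channel on every compute is wasteful; a real node
        # would cache it in per-node state. Kept simple for the sketch.
        auth = riva.client.Auth(uri="localhost:50051")
        tts = riva.client.SpeechSynthesisService(auth)

        response = tts.synthesize(
            text=text,
            voice_name=db.inputs.voice or "English-US.Female-1",
            language_code=db.inputs.languageCode or "en-US",
            sample_rate_hz=44100,
        )

        # Hand the raw PCM bytes downstream, e.g. to a streaming audio player.
        db.outputs.audio = list(response.audio)
        return True
```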

