Successfully deployed the Riva Docker image on a local machine via WSL and CUDA

Success! I was greeted by the sweet sound of Riva saying “Hello, This is a speech synthesizer!” Music to my ears! If you are stuck and have a similar setup, I may be able to help. Just drop me a line.

Now I need to figure out how to get Riva into a web app! Ugh! Please let me know if you can help with that. Cheers!
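
In case it helps anyone trying the same thing programmatically, here is a minimal sketch of a TTS request against a local Riva server. It assumes the nvidia-riva-client Python package and the default localhost:50051 endpoint from the quick-start scripts, so adjust for your own setup:

```python
import wave

import riva.client  # pip install nvidia-riva-client

# Assumption: the quick-start Riva server is listening on the default port.
auth = riva.client.Auth(uri="localhost:50051")
tts = riva.client.SpeechSynthesisService(auth)

response = tts.synthesize(
    "Hello, This is a speech synthesizer!",
    language_code="en-US",
    sample_rate_hz=44100,
)

# response.audio holds raw 16-bit PCM samples; wrap them in a WAV container.
with wave.open("hello.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(44100)
    out.writeframes(response.audio)
```

A client like this could sit behind a small web backend that streams the WAV bytes to the browser, which might be one route into a web app.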

Hardware - GPU: GeForce 3800
Hardware - CPU: Intel i5
Operating System: Win 10
Riva Version: 2.8.1

Hi @niels.maclean

Thanks for your interest in Riva

We have some existing sample apps (see the link below) that you can use as a reference.

I will check with the internal team and provide further input.

Thanks

Hello Niels

I am trying to get Riva with Speech to Text and Text to Speech working on my local machine.
I’m using CUDA, WSL, and Docker.

HW - GPU: GeForce RTX 2070, CPU: Intel(R) Core™ i7-3820 @ 3.60 GHz
SW - Win 10

According to my riva_init.sh logs, I’m using riva-speech:2.14.0.

Currently troubleshooting these errors:

  • RMIRs not downloading. Investigating whether I need to generate my own
  • No .riva file; do I need to build one?
  • grpc_health_probe error
  • “Waiting for Riva server to load all models... retrying in 10 seconds” repeats until it fails with “Health ready check failed” (see the sketch after this list)
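
On that last point, the readiness check that grpc_health_probe performs can also be run from Python, which makes it easier to see whether the server itself ever comes up. This is only a sketch, assuming the grpcio and grpcio-health-checking packages and the default localhost:50051 port:

```python
import grpc  # pip install grpcio grpcio-health-checking
from grpc_health.v1 import health_pb2, health_pb2_grpc

# Assumption: the Riva server should be listening on the default port.
channel = grpc.insecure_channel("localhost:50051")
stub = health_pb2_grpc.HealthStub(channel)

try:
    resp = stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=5)
    # Prints SERVING once the server has finished loading its models.
    print(health_pb2.HealthCheckResponse.ServingStatus.Name(resp.status))
except grpc.RpcError as err:
    # UNAVAILABLE usually means the container never finished starting up.
    print(f"Health check failed: {err.code().name}")
```

If the status never reaches SERVING, the Riva container’s Docker logs usually show which model failed to load.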

Any insights would be marvelous :)