Hi,
riva_start.sh doesn’t load all the models, and I wonder if this is happening because of my GPU.
Hardware - GPU: TU104 [GeForce RTX 2070 SUPER]
Hardware - CPU: Intel® Core™ i9-9900 CPU @ 3.10GHz × 16
Operating System: Ubuntu 20.04.5 LTS
Riva Version: 2.6.0
log.txt (105.6 KB)
Hi @hadrien.goutas
Thanks for your interest in Riva
Yes, we recommend a GPU with 16+ GB of VRAM.
Please find the support matrix below for reference:
https://docs.nvidia.com/deeplearning/riva/user-guide/docs/support-matrix.html
However, in config.sh we can try running only one service. For example, if we only want to run ASR, we can set the service flag for the service we are interested in to true and the rest to false.
We can also drill down further into the service we want to run (let’s say NLP) and disable the models we don’t need by adding a # at the start of their lines, as in the sketch below.
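A rough sketch of what that might look like (the exact flag and model names below are illustrative and may differ depending on the config.sh that ships with your Riva version):

```bash
# Enable/disable Riva services -- only NLP is left enabled here
service_enabled_asr=false
service_enabled_nlp=true
service_enabled_tts=false

# Inside the NLP model list, comment out the models you don't need
models_nlp=(
    "${riva_ngc_org}/${riva_ngc_team}/rmir_nlp_punctuation_bert_base_en_us:${riva_ngc_model_version}"
#    "${riva_ngc_org}/${riva_ngc_team}/rmir_nlp_intent_slot_bert_base:${riva_ngc_model_version}"
#    "${riva_ngc_org}/${riva_ngc_team}/rmir_nlp_named_entity_recognition_bert_base:${riva_ngc_model_version}"
)
```

After changing config.sh, you will typically need to run riva_init.sh again (and optionally riva_clean.sh first) before riva_start.sh so the new model selection takes effect.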
Thanks
Thanks for your reply.
I tried with TTS only but got the same output…
Hi @hadrien.goutas
Can we try running just one NLP model?
Thanks
Hi,
Same output; here is my config.sh:
config.sh (9.6 KB)
Hi @hadrien.goutas
If the Riva logs report CUDA out of memory or related errors, then it is an issue with the GPU.
Can you share the riva_init and riva_start logs?
Hi,
Here are the logs:
init_logs.txt (11.8 KB)
I can’t attach two files in one message, so here are the start logs:
start_logs.txt (103.5 KB)
Hi @hadrien.goutas
My apologies, the logs indicate a problem with GPU memory,
so we recommend trying one of the GPUs suggested in the link below:
https://docs.nvidia.com/deeplearning/riva/user-guide/docs/support-matrix.html#server-hardware
Also, on another note, some other applications or processes might be using the GPU (apart from Riva). We can check that with the nvidia-smi command, close them, and then try again.
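For example (the process IDs shown will of course be specific to your machine):

```bash
# Show GPU memory usage and the processes currently holding GPU memory
nvidia-smi

# If anything other than Riva appears in the "Processes" table at the
# bottom of the output, stop it before re-running riva_start.sh, e.g.:
# kill <PID>
```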