Can we use Riva as a standalone package?

Hi,

I am using JetPack 5.0.2, which you say includes Riva, but I can't find Riva in the JetPack. The only way I know is to download the Docker image from NGC and then use Riva inside the Docker container.

Can I use Riva without a Docker container?

Can you please share the command to download the container? I tried to find a container that supports Jetson NX but failed. Thank you!

Hi @OnePieceOfDeepLearning

Thanks for your interest in Riva.

I will check with the internal team, but I believe Riva works based on Docker.

Thanks

Hi @wang.liang

Thanks for your interest in Riva

You can refer to the embedded quick start guide,

and in general
https://docs.nvidia.com/deeplearning/riva/user-guide/docs/quick-start-guide.html#embedded
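Since you asked for the download command: a minimal sketch of fetching and running the embedded quickstart via the NGC CLI (assumptions: the NGC CLI is installed and configured with `ngc config set`, and v2.6.0 is the version you want -- adjust the tag as needed):

```shell
# Download the arm64 (embedded) quickstart resource from NGC
ngc registry resource download-version "nvidia/riva/riva_quickstart_arm64:2.6.0"

cd riva_quickstart_arm64_v2.6.0
bash riva_init.sh    # downloads models/containers per config.sh
bash riva_start.sh   # starts the Riva server container
```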

If that is what you are looking for

Thanks

@rvinobha ,
Thank you for your quick reply!
Actually, I have read the webpage you shared several times already. The first thing I tried was following the instructions on that page (riva_quickstart_arm64_v2.6.0).

After running bash riva_init.sh and bash riva_start.sh successfully, I tried riva_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav and got an error message:
I1013 11:35:04.023735 23 grpc_server.cc:4544] Started GRPCInferenceService at 0.0.0.0:8001
I1013 11:35:04.028112 23 http_server.cc:3242] Started HTTPService at 0.0.0.0:8000
I1013 11:35:04.135290 23 http_server.cc:180] Started Metrics Service at 0.0.0.0:8002
Waiting for Riva server to load all models…retrying in 10 seconds
Waiting for Riva server to load all models…retrying in 10 seconds
Riva server is ready…
Use this container terminal to run applications:
root@08c3d3a7f804:/opt/riva# riva_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav
I1013 11:36:34.613380 195 riva_asr_client.cc:445] Using Insecure Server Credentials
Loading eval dataset…
filename: /opt/riva/wav/en-US_sample.wav
Done loading 1 files
RPC failed: Error: Unavailable model requested. Lang: en-US, Type: offline
Done processing 1 responses
Some requests failed to complete properly, not printing performance stats
The key parameters I set in config.sh:
riva_target_gpu_family="tegra"
riva_arm64_legacy_platform="xavier"
service_enabled_asr=true
service_enabled_nlp=false
service_enabled_tts=false
language_code=("en-US") (if I change it to language_code=("en-US" "zh-CN"), then bash riva_start.sh cannot start)
asr_acoustic_model=("conformer")
MODEL_DEPLOY_KEY="tlt_encode"
Can you please help me figure out why?

But this command executes successfully:
riva_streaming_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav

So I want to try a container, which may be easier. Should I use Riva Speech Clients? Riva Speech Clients | NVIDIA NGC

Hi @wang.liang

RPC failed: Error: Unavailable model requested. Lang: en-US, Type: offline

The prebuilt offline ASR model is not packaged by default. It has to be deployed using an RMIR: you can enable the offline ASR RMIR in the quickstart config.sh and run riva_init.sh again to deploy it.
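As an illustrative sketch only (the exact model names and array layout here are assumptions -- check the ASR models section of your actual config.sh), enabling the offline RMIR alongside the streaming one could look like:

```shell
# config.sh (embedded quickstart) -- illustrative fragment, not verbatim:
# uncomment the offline (ofl) RMIR entry so riva_init.sh deploys it.
models_asr=(
    "${riva_ngc_org}/${riva_ngc_team}/rmir_asr_conformer_en_us_str:${riva_ngc_model_version}"
    "${riva_ngc_org}/${riva_ngc_team}/rmir_asr_conformer_en_us_ofl:${riva_ngc_model_version}"
)
```

After editing, rerun bash riva_init.sh followed by bash riva_start.sh.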

But this command can execute successfully: riva_streaming_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav

Since the prebuilt streaming ASR model is packaged by default, this works out of the box. Due to space constraints on Jetson, only the streaming versions of the ASR models are packaged as prebuilt for all languages.

if I change to language_code=(“en-US” “zh-CN”), then bash riva_start.sh cannot start.

Only one language at a time can be set in the language_code field. There is a documentation snippet in the config.sh file on how to use this field.
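For example, an illustrative config.sh fragment under the constraint above:

```shell
# config.sh: set exactly one language code per deployment
language_code=("en-US")
# language_code=("zh-CN")          # to switch languages, swap the entry and rerun riva_init.sh
# language_code=("en-US" "zh-CN")  # NOT supported: multiple entries fail to start
```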

I want to try a container which may be easier. Should I use Riva Speech Clients?

These clients are provided in the Riva server image itself, so the way you are trying to use them is correct.
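A minimal sketch of that flow, using only the commands and paths already shown in this thread:

```shell
# Start the server; on embedded, riva_start.sh also attaches a shell
# inside the server container, where the client binaries live.
bash riva_start.sh

# From that container shell, run a bundled client against the sample file:
riva_streaming_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav
```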

Can I use riva without docker container?

In the latest public releases, Riva is supported only through a Docker container.

@rvinobha,
Thank you for your reply.
I'm trying to enable the offline ASR RMIR in the quickstart config.sh, and I moved the line RMIR ($riva_model_loc/rmir) to the position below:

riva_model_loc="riva-model-repo"
riva_rmir_loc="pwd/model_repository"

if [[ $riva_target_gpu_family == "tegra" ]]; then
riva_model_loc="pwd/model_repository"
#riva_model_loc="pwd/models"
fi
RMIR ($riva_model_loc/rmir)

but I got an error:
riva_quickstart_arm64_v2.6.0/config.sh: line 77: syntax error near unexpected token `$riva_model_loc/rmir'

It seems RMIR is not a recognized command. Is there some other software package I missed?

Hi @wang.liang

Apologies. Sometimes this kind of error can happen if something is not correct in config.sh.

Could you kindly share your config.sh for review?
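For context, my reading of the stock config.sh (an assumption -- please verify against your own copy): the RMIR ($riva_model_loc/rmir) text appears inside a comment block describing the model directory layout, not as a shell command, so it is only valid while commented:

```shell
# Illustrative, not verbatim: in the stock config.sh this text sits
# inside a comment block, something like:
#
#   Custom models prepared with riva-build are stored under:
#   RMIR ($riva_model_loc/rmir)
#
# Moving it out of the comment makes bash parse it as a command,
# producing the "syntax error near unexpected token" seen above.
```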

config.sh (10.0 KB)
Hi, @rvinobha , Sorry for my late reply. Here is the config.sh I used.

Hi @wang.liang

Apologies for the delay,

I have an update: the config.sh was not right.
I will share a config file in this reply.
The streaming model is commented out and the offline model is enabled.
Please try with the new config.sh shared here and let us know if you face any further issues.

Thanks
config.sh (10.2 KB)