Use NanoLLM with Riva Speech Server for Spanish asr_models_languages_map: Failed

Hello!

Can you help me?

I’m trying to run NanoLLM with Riva Speech Server and Spanish language.

Platform:
* System: Linux
* Distribution: Ubuntu 22.04 Jammy Jellyfish
* Release: 5.15.148-tegra
* Python: 3.10.12

Libraries:
* CUDA: 12.6.68
* CUDNN: 9.3.0.75
* TensorRT: 10.3.0.30
* VPI: 3.2.4
* Vulkan: 1.3.204
* OpenCV: 4.10.0 with CUDA: YES

Hardware:

* Model: NVIDIA Jetson Orin NX Engineering Reference Developer Kit Super
* 699-level Part Number: 699-13767-0000-301 G.1
* P-Number: p3767-0000
* Module: NVIDIA Jetson Orin NX (16GB RAM)
* SoC: tegra234
* L4T: 36.4.3
* Jetpack: 6.2

Requirements and Setup for Spanish ASR:

./riva_stop.sh
Shutting down docker containers...
./riva_clean.sh
Cleaning up local Riva installation.
Image nvcr.io/nvidia/riva/riva-speech:2.17.0-l4t-aarch64 found. Delete? [y/N] y
Error response from daemon: get home/silenzio/lib/riva_quickstart_arm64_2.19.0/model_repository: no such volume
'/home/silenzio/lib/riva_quickstart_arm64_2.19.0/model_repository' is not a Docker volume, or has already been deleted.
Found models at '/home/silenzio/lib/riva_quickstart_arm64_2.19.0/model_repository'. Delete? [y/N] y

Change this line in “config.sh”:

# Specify ASR language to deploy, as defined in "asr_models_languages_map" above
# For multiple languages, enter space separated language codes
asr_language_code=("es-ES") ## en-US")
./riva_init.sh
Please enter API key for ngc.nvidia.com: 
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Pulling nvcr.io/nvidia/riva/riva-speech:2.19.0-l4t-aarch64. This may take some time...
--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_asr_conformer_es_es_str_v2.19.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 301.27 MB
   Started at: 2025-06-02 11:42:31
   Completed at: 2025-06-02 11:43:03
   Duration taken: 31s
--------------------------------------------------------------------------------
...

Run Riva Speech Server:

./riva_start.sh
...
Waiting for Riva server to load all models...retrying in 10 seconds
status: SERVING
Riva server is ready...
Use this container terminal to run applications:
root@ce9ba0d09d38:/opt/riva#

Run NanoLLM container:

jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.agents.web_chat --api=mlc \
    --model Efficient-Large-Model/VILA-7b \
    --asr=riva --tts=piper

I get this error:

...
* Running on all addresses (0.0.0.0)
 * Running on https://127.0.0.1:8050
 * Running on https://192.168.2.41:8050
14:46:32 | INFO | Press CTRL+C to quit
Exception in thread RivaASR:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/opt/NanoLLM/nano_llm/plugins/speech/riva_asr.py", line 117, in run
    self.generate(self.audio_queue)
  File "/opt/NanoLLM/nano_llm/plugins/speech/riva_asr.py", line 134, in generate
    for response in responses:
  File "/usr/local/lib/python3.10/dist-packages/riva/client/asr.py", line 387, in streaming_response_generator
    for response in self.stub.StreamingRecognize(generator, metadata=self.auth.get_auth_metadata()):
  File "/usr/local/lib/python3.10/dist-packages/grpc/_channel.py", line 543, in __next__
    return self._next()
  File "/usr/local/lib/python3.10/dist-packages/grpc/_channel.py", line 969, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.INVALID_ARGUMENT
	details = "Error: Unavailable model requested given these parameters: language_code=en; sample_rate=16000; type=online; "
	debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2025-06-02T14:46:32.361579974+03:00", grpc_status:3, grpc_message:"Error: Unavailable model requested given these parameters: language_code=en; sample_rate=16000; type=online; "}"
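For quick log triage, the key=value parameters inside that details string can be pulled out with a small stdlib helper (a hypothetical sketch, not part of NanoLLM or Riva; note the client asked for language_code=en while only es-ES was deployed):

```python
import re

def parse_riva_error(details: str) -> dict:
    """Extract the key=value pairs from Riva's
    'Unavailable model requested' error details string."""
    return dict(re.findall(r"(\w+)=([^;]+);", details))

details = ("Error: Unavailable model requested given these parameters: "
           "language_code=en; sample_rate=16000; type=online; ")
print(parse_riva_error(details))
# {'language_code': 'en', 'sample_rate': '16000', 'type': 'online'}
```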

Hi,

Does it work normally when using en-US?
If you haven't tried it, please give it a try.

This test helps us figure out whether the issue is specific to non-English languages.

Thanks.

Yes, it works normally with en-US.

It does not work with non-English languages.

Please help.
Thank you!

Hi,

Based on the error:

"Error: Unavailable model requested given these parameters: language_code=en; sample_rate=16000; type=online; "

The model might not support the language you used.
We will give it a try internally and update more info to you later.

Thanks.
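
Meanwhile, since the error also lists sample_rate=16000, it is worth confirming the audio you stream is really 16 kHz. A stdlib sketch (the demo file below is synthetic, standing in for e.g. en-US_sample.wav):

```python
import wave

def wav_info(path: str) -> tuple:
    """Return (sample_rate_hz, channels) read from a WAV header."""
    with wave.open(path, "rb") as w:
        return w.getframerate(), w.getnchannels()

# Create a synthetic 16 kHz mono file just for the demo:
with wave.open("demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                  # 16-bit samples
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 160)   # 10 ms of silence

print(wav_info("demo.wav"))  # (16000, 1)
```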

“The model might not support the language you used.”

Do you mean NanoLLM model or Riva model?

Hi,

The error is generated from the RIVA.

Thanks.

I’m waiting for updates from you…
Thank you!

Hi,

We are checking this issue internally.
Will update more info to you later.

Thanks.

Hi,

Thanks for your patience.
The language code “es-ES” can work with RIVA. Here is our testing in detail:

If we only modify asr_language_code=("es-ES"), RIVA shows the error below, since NLP only supports English:

$ bash riva_init.sh
Error: NLP not supported for languages other than English

So we disable NLP to allow it to work with other languages:

--- a/config.sh
+++ b/config.sh
@@ -18,7 +18,7 @@ riva_tegra_platform="orin"

 # For any language other than en-US: service_enabled_nlp must be set to false
 service_enabled_asr=true
-service_enabled_nlp=true
+service_enabled_nlp=false
 service_enabled_tts=true
 service_enabled_nmt=false

@@ -57,7 +57,7 @@ asr_acoustic_model=("conformer")

 # Specify ASR language to deploy, as defined in "asr_models_languages_map" above
 # For multiple languages, enter space separated language codes
-asr_language_code=("en-US")
+asr_language_code=("es-ES")

 # Specify ASR accessory model from below list, prebuilt model available only when "asr_acoustic_model" is set to "parakeet_1.1b"
 # "diarizer" : deploy ASR model with Speaker Diarization model
diff --git a/riva_start.sh b/riva_start.sh
index 8ad07a9..0cbdb07 100644
--- a/riva_start.sh
+++ b/riva_start.sh
@@ -101,7 +101,7 @@ if [ $(docker ps -q -f "name=^/$riva_daemon_speech$" | wc -l) -eq 0 ]; then
     docker run -d \
         --init \
         --ipc=host \
-        --gpus '"'$gpus_to_use'"' \
+        --runtime=nvidia \
         -p $riva_speech_api_port:$riva_speech_api_port \
         -p $riva_speech_api_http_port:$riva_speech_api_http_port \
         -e RIVA_SERVER_HTTP_PORT=$riva_speech_api_http_port \

Then we test it with a default file, and it works correctly.

# riva_streaming_asr_client --audio_file=/opt/riva/wav/es-ES_sample.wav
I0611 05:53:26.773959  4100 grpc.h:101] Using Insecure Server Credentials
Loading eval dataset...
filename: /opt/riva/wav/es-ES_sample.wav
Done loading 1 files
in rio
in rio
in rio
and
in rio
in rio de
rio
rio
rio tigris
rio de grist
tigris
tigris
tigris
tigris
rio grist
rio
rio tenaya
rio tania
rio tigris
rio tigris on
rio tigris
rio tigris
rio tigris ten on
rio tigris ten on
Rio Tigris, ten Tipo.
-----------------------------------------------------------
File: /opt/riva/wav/es-ES_sample.wav

Final transcripts:
0 : Rio Tigris, ten Tipo.

Timestamps:
Word                                    Start (ms)      End (ms)        Confidence

Rio                                     440             680             3.3618e-02
Tigris,                                 840             1400            9.3009e-03
ten                                     1520            1680            7.9300e-02
Tipo.                                   2080            2400            7.8500e-03


Audio processed: 4.4800e+00 sec.
-----------------------------------------------------------

Not printing latency statistics because the client is run without the --simulate_realtime option and/or the number of requests sent is not equal to number of requests received. To get latency statistics, run with --simulate_realtime and set the --chunk_duration_ms to be the same as the server chunk duration
Run time: 6.9652e-01 sec.
Total audio processed: 5.9760e+00 sec.
Throughput: 8.5798e+00 RTFX

Please let us know if you can also use RIVA with an es-ES audio file first.
Thanks.

Thank you very much!

I will try and let you know.

Hello!

Sorry, but in my case it does not work even in English.

I made these changes:

+++ b/config.sh
@@ -18,7 +18,7 @@ riva_tegra_platform="orin"

 # For any language other than en-US: service_enabled_nlp must be set to false
 service_enabled_asr=true
-service_enabled_nlp=true
+service_enabled_nlp=false
 service_enabled_tts=true
 service_enabled_nmt=false


+++ b/riva_start.sh
@@ -101,7 +101,7 @@ if [ $(docker ps -q -f "name=^/$riva_daemon_speech$" | wc -l) -eq 0 ]; then
     docker run -d \
         --init \
         --ipc=host \
-        --gpus '"'$gpus_to_use'"' \
+        --runtime=nvidia \
         -p $riva_speech_api_port:$riva_speech_api_port \
         -p $riva_speech_api_http_port:$riva_speech_api_http_port \
         -e RIVA_SERVER_HTTP_PORT=$riva_speech_api_http_port \

Init:

$ ./riva_init.sh
Please enter API key for ngc.nvidia.com: 
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Image nvcr.io/nvidia/riva/riva-speech:2.19.0-l4t-aarch64 exists. Skipping.

Downloading models (RMIRs) from NGC...
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing RMIRs set the location and corresponding flag in config.sh.
2025-06-11 17:47:33 URL:https://xfiles.ngc.nvidia.com/org/nvidia/team/ngc-apps/recipes/ngc_cli/versions/3.48.0/files/ngccli_arm64.zip?versionId=sPn0KF0IeLN_9vFxB35JiAi3I4VPz.AW&Signature=Sa~1dU39jFV~G8u7KQ4UiIQQOX0nu8--I6sGFdX6U1G2u9kuZ9DJ2FQMNo1jeMHhnB1Xa-KOxps~BHvKMouNqFTam~EGwmGuQ9TfWKxZCK2npx-oVKLjFFjJDAjH-ix2VGeJDl-KTScaZP4-MDBmjCxjGRHjMy7~KvNb8zYDgZiqbMeZ7G3pGrF6UwGkcuJEGitLh-i9M7EQOlIZvPcVphSGgO~d2rkLfRJLqswr7cYoGWFcfvHKSXCUh1UzoZOxtSxo7-qREkIg4otFrSwGqq1qTdtT-xKJ8ZoFVUORXBbVneA-Rp72MB2gvCisgFXQkRpG4Tyxj2u1RB84glIzEg__&Expires=1749750448&Key-Pair-Id=KCX06E8E9L60W [50007324/50007324] -> "ngccli_arm64.zip" [1]
/opt/riva

CLI_VERSION: Latest - 3.152.2 available (current: 3.48.0). Please update by using the command 'ngc version upgrade' 

Getting files to download...
--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_asr_conformer_en_us_str_v2.19.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 802.72 MB
   Started at: 2025-06-11 17:47:38
   Completed at: 2025-06-11 17:48:54
   Duration taken: 1m 15s
--------------------------------------------------------------------------------
Getting files to download...
--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_nlp_punctuation_bert_base_en_us_v2.19.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 191.71 MB
   Started at: 2025-06-11 17:51:00
   Completed at: 2025-06-11 17:51:19
   Duration taken: 18s
--------------------------------------------------------------------------------

+ [[ tegra != \t\e\g\r\a ]]
+ [[ tegra == \t\e\g\r\a ]]
+ '[' -d /home/silenzio/lib/riva_quickstart_arm64_2.19.0/model_repository/rmir ']'
+ [[ tegra == \t\e\g\r\a ]]
+ '[' -d /home/silenzio/lib/riva_quickstart_arm64_2.19.0/model_repository/prebuilt ']'
+ echo 'Converting prebuilts at /home/silenzio/lib/riva_quickstart_arm64_2.19.0/model_repository/prebuilt to Riva Model repository.'
Converting prebuilts at /home/silenzio/lib/riva_quickstart_arm64_2.19.0/model_repository/prebuilt to Riva Model repository.
+ docker run -it -d --rm -v /home/silenzio/lib/riva_quickstart_arm64_2.19.0/model_repository:/data --name riva-models-extract nvcr.io/nvidia/riva/riva-speech:2.19.0-l4t-aarch64
+ docker exec riva-models-extract bash -c 'mkdir -p /data/models; \
      for file in /data/prebuilt/*.tar.gz; do tar xf $file -C /data/models/ &> /dev/null; done'
+ docker container stop riva-models-extract
+ '[' 0 -ne 0 ']'
+ echo

+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.

Run:

root@74eafd2cd5f1:/opt/riva# riva_streaming_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav
I0611 17:56:35.450798  4342 grpc.h:101] Using Insecure Server Credentials
Loading eval dataset...
filename: /opt/riva/wav/en-US_sample.wav
Done loading 1 files
Error: Unavailable model requested given these parameters: language_code=en; sample_rate=16000; type=online; 
Not printing latency statistics because the client is run without the --simulate_realtime option and/or the number of requests sent is not equal to number of requests received. To get latency statistics, run with --simulate_realtime and set the --chunk_duration_ms to be the same as the server chunk duration
Run time: 0.00185673 sec.
Total audio processed: 4.152 sec.
Throughput: 2236.19 RTFX
$ jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.agents.web_chat --api=mlc \
    --model Efficient-Large-Model/VILA-7b \
    --asr=riva --tts=piper

...

Exception in thread RivaASR:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/opt/NanoLLM/nano_llm/plugins/speech/riva_asr.py", line 117, in run
    self.generate(self.audio_queue)
  File "/opt/NanoLLM/nano_llm/plugins/speech/riva_asr.py", line 134, in generate
    for response in responses:
  File "/usr/local/lib/python3.10/dist-packages/riva/client/asr.py", line 387, in streaming_response_generator
    for response in self.stub.StreamingRecognize(generator, metadata=self.auth.get_auth_metadata()):
  File "/usr/local/lib/python3.10/dist-packages/grpc/_channel.py", line 543, in __next__
    return self._next()
  File "/usr/local/lib/python3.10/dist-packages/grpc/_channel.py", line 969, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.INVALID_ARGUMENT
	details = "Error: Unavailable model requested given these parameters: language_code=en; sample_rate=16000; type=online; "
	debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2025-06-11T20:43:12.908040808+03:00", grpc_status:3, grpc_message:"Error: Unavailable model requested given these parameters: language_code=en; sample_rate=16000; type=online; "}"

This part of the code in “riva_start.sh” does not actually run:

docker run -d \
        --init \
        --ipc=host \
        --runtime=nvidia \ 
        -p $riva_speech_api_port:$riva_speech_api_port \
        -p $riva_speech_api_http_port:$riva_speech_api_http_port \

Because of this message:

Riva Speech already running. Skipping...
...

This part of the code in “riva_start.sh” does run:

# Give terminal access for arm64/embedded to try clients, use -s (server only) to disable
if [[ $riva_target_gpu_family == "tegra" ]]; then
    if [[ "$2" != "-s" ]]; then
        echo "Use this container terminal to run applications:"
        docker exec -it $riva_daemon_speech /bin/bash
    fi
fi

Hi,

Some tricks are required to run RIVA on JetPack 6.2.
Could you check the below comment to see if all the workarounds (WARs) have been applied?

Thanks.

Hello!
Thank you for your help!

I made a fresh install of the Riva server:

Downgrade Docker:

sudo apt-get install -y docker-ce=5:27.5* docker-ce-cli=5:27.5* --allow-downgrades
sudo systemctl restart docker
sudo usermod -aG docker $USER
newgrp docker

Get ngc cli:

wget --content-disposition https://ngc.nvidia.com/downloads/ngccli_arm64.zip && unzip ngccli_arm64.zip && chmod u+x ngc-cli/ngc
find ngc-cli/ -type f -exec md5sum {} + | LC_ALL=C sort | md5sum -c ngc-cli.md5
echo "export PATH='$PATH:$(pwd)/ngc-cli'" >> ~/.bash_profile && source ~/.bash_profile

Setup ngc cli:

$ ngc config set
Enter API key [no-apikey]. Choices: [<VALID_APIKEY>, 'no-apikey']: nvapi-...........................................
Enter CLI output format type [ascii]. Choices: ['ascii', 'csv', 'json']: 
Enter org [no-org]. Choices: ['...']: ...
Enter team [no-team]. Choices: ['no-team']: no-team
Enter ace [no-ace]. Choices: ['no-ace']: no-ace
Validating configuration...
Successfully validated configuration.
Saving configuration...
Successfully saved NGC configuration to /home/silenzio/.ngc/config

File “/home/silenzio/.ngc/config”:

;WARNING - This is a machine generated file.  Do not edit manually.
;WARNING - To update local config settings, see "ngc config set -h" 

[CURRENT]
apikey = nvapi-...........................................
format_type = ascii
org = .......

Get riva_server:

$ ngc registry resource download-version nvidia/riva/riva_quickstart_arm64:2.19.0
Getting files to download...
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ • 156.8/156.8 KiB • Remaining: 0:00:00 • 153.8 kB/s • Elapsed: 0:00:02 • Total: 24 - Completed: 24 - Failed: 0
-----------------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path resource: /home/silenzio/Downloads/riva_quickstart_arm64_v2.19.0
   Total files downloaded: 24
   Total transferred: 156.83 KB
   Started at: 2025-06-12 13:30:28
   Completed at: 2025-06-12 13:30:30
   Duration taken: 2s
-----------------------------------------------------------------------------------------

Install riva_server:

cd riva_quickstart_arm64_v2.19.0/
mkdir model_repository/models -p
$ bash riva_init.sh
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Pulling nvcr.io/nvidia/riva/riva-speech:2.19.0-l4t-aarch64. This may take some time...

Downloading models (RMIRs) from NGC...
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing RMIRs set the location and corresponding flag in config.sh.
2025-06-12 10:33:07 URL:https://xfiles.ngc.nvidia.com/org/nvidia/team/ngc-apps/recipes/ngc_cli/versions/3.48.0/files/ngccli_arm64.zip?versionId=sPn0KF0IeLN_9vFxB35JiAi3I4VPz.AW&Signature=x03yX4tqGdFYQ2jh~RI10Ffho~JPuiEk8jFwdWutW3-14Cho0LYtBJFlErwHorzjh~6ds4lu7duZpxY~IQgx~qOYZWBy0g32BrmSlvjHNJOQOhsrGS7~nDxr-xISe5YlwNMVLqPEZohWw4m6fIjJwyT5tO4Tvv7jTczQc9UrsM7Broi2uwEBA-QSwLhQmrsQInnYcpQkvW3FpziZCT5coju3I85QPsiTWD9180obuiixh6C~WXd0fkfjFXUfRr~YgjtNC97A095bfxKm0z4Tsbe5VLI8fLsZ-Mf73m2zLJM24yFxNlNfdD44B68MNS0rND8KqJNl0YWkur6Nn5POvw__&Expires=1749810783&Key-Pair-Id=KCX06E8E9L60W [50007324/50007324] -> "ngccli_arm64.zip" [1]
/opt/riva

CLI_VERSION: Latest - 3.152.2 available (current: 3.48.0). Please update by using the command 'ngc version upgrade' 

Getting files to download...

--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_asr_conformer_en_us_str_v2.19.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 802.72 MB
   Started at: 2025-06-12 10:33:12
   Completed at: 2025-06-12 10:34:27
   Duration taken: 1m 14s
--------------------------------------------------------------------------------
Getting files to download...

--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_nlp_punctuation_bert_base_en_us_v2.19.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 191.71 MB
   Started at: 2025-06-12 10:36:33
   Completed at: 2025-06-12 10:36:51
   Duration taken: 18s
--------------------------------------------------------------------------------
Getting files to download...
--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_nlp_punctuation_bert_base_en_us_v2.19.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 191.71 MB
   Started at: 2025-06-12 10:38:58
   Completed at: 2025-06-12 10:39:17
   Duration taken: 18s
--------------------------------------------------------------------------------
Getting files to download...

--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_tts_fastpitch_hifigan_en_us_ipa_v2.19.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 187.44 MB
   Started at: 2025-06-12 10:39:20
   Completed at: 2025-06-12 10:39:38
   Duration taken: 17s
--------------------------------------------------------------------------------

+ [[ tegra != \t\e\g\r\a ]]
+ [[ tegra == \t\e\g\r\a ]]
+ '[' -d /home/silenzio/Downloads/riva_quickstart_arm64_v2.19.0/model_repository/rmir ']'
+ [[ tegra == \t\e\g\r\a ]]
+ '[' -d /home/silenzio/Downloads/riva_quickstart_arm64_v2.19.0/model_repository/prebuilt ']'
+ echo 'Converting prebuilts at /home/silenzio/Downloads/riva_quickstart_arm64_v2.19.0/model_repository/prebuilt to Riva Model repository.'
Converting prebuilts at /home/silenzio/Downloads/riva_quickstart_arm64_v2.19.0/model_repository/prebuilt to Riva Model repository.
+ docker run -it -d --rm -v /home/silenzio/Downloads/riva_quickstart_arm64_v2.19.0/model_repository:/data --name riva-models-extract nvcr.io/nvidia/riva/riva-speech:2.19.0-l4t-aarch64
+ docker exec riva-models-extract bash -c 'mkdir -p /data/models; \
      for file in /data/prebuilt/*.tar.gz; do tar xf $file -C /data/models/ &> /dev/null; done'
+ docker container stop riva-models-extract
+ '[' 0 -ne 0 ']'
+ echo

+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.

Run Riva Speech Services test (English language):

./riva_start.sh
Starting Riva Speech Services. This may take several minutes depending on the number of models deployed.
Waiting for Riva server to load all models...retrying in 10 seconds
Riva server is ready...
Use this container terminal to run applications:

root@d418cb8c06c2:/opt/riva# riva_streaming_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav
I0612 10:55:32.405516   311 grpc.h:101] Using Insecure Server Credentials
Loading eval dataset...
filename: /opt/riva/wav/en-US_sample.wav
Done loading 1 files
what
what
what is
what is
what is
what is now tilde
what is natural
what is natural
what is natural
what is natural language
what is natural language
what is natural language
what is natural language processing
what is natural language processing
what is natural language processing
what is natural language processing
what is natural language processing
what is tural language processing
what is language processing
What is natural language processing? 
-----------------------------------------------------------
File: /opt/riva/wav/en-US_sample.wav

Final transcripts: 
0 : What is natural language processing? 

Timestamps: 
Word                                    Start (ms)      End (ms)        Confidence      

What                                    920             960             1.9195e-01      
is                                      1200            1240            5.4835e-01      
natural                                 1720            2080            1.0869e-01      
language                                2240            2600            6.7237e-01      
processing?                             2720            3200            1.0000e+00      

Audio processed: 4.0000e+00 sec.
-----------------------------------------------------------

Not printing latency statistics because the client is run without the --simulate_realtime option and/or the number of requests sent is not equal to number of requests received. To get latency statistics, run with --simulate_realtime and set the --chunk_duration_ms to be the same as the server chunk duration
Run time: 7.3754e-01 sec.
Total audio processed: 4.1520e+00 sec.
Throughput: 5.6295e+00 RTFX
root@d418cb8c06c2:/opt/riva# 

Works!


Run nano_llm:

jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.agents.web_chat --api=mlc \
    --model Efficient-Large-Model/VILA-7b \
    --asr=riva --tts=piper

Works!


Install another language:

./riva_stop.sh
Shutting down docker containers...

Check docker:

$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Change this line in “config.sh”:

# Specify ASR language to deploy, as defined in "asr_models_languages_map" above
# For multiple languages, enter space separated language codes
asr_language_code=("es-ES") ## en-US")
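
For reference, the comment above says multiple languages are space-separated; assuming each code is its own array entry (this is not what I deployed), a multi-language line would look like:

```shell
# Hypothetical multi-language deploy (space-separated codes in the bash array):
asr_language_code=("es-ES" "en-US")
```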

Install riva_server again:

cd riva_quickstart_arm64_v2.19.0/
mkdir model_repository/models -p
silenzio@jetsonnx:~/lib/riva_quickstart_arm64_v2.19.0$ bash riva_init.sh
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Image nvcr.io/nvidia/riva/riva-speech:2.19.0-l4t-aarch64 exists. Skipping.
...
+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.

Test es-ES language:

./riva_start.sh
root@649ce50514e8:/opt/riva# riva_streaming_asr_client --audio_file=/opt/riva/wav/es-ES_sample.wav
I0612 13:14:02.002132  7733 grpc.h:101] Using Insecure Server Credentials
Loading eval dataset...
filename: /opt/riva/wav/es-ES_sample.wav
Done loading 1 files
in rio
in rio
in rio
and
in rio
in rio de
rio
rio
rio tigris
rio de grist
tigris
tigris
tigris
tigris
rio grist
rio
rio tenaya
rio tania
rio tigris
rio tigris on
rio tigris
rio tigris
rio tigris ten on
rio tigris ten on
Rio tigris ten tipo 
-----------------------------------------------------------
File: /opt/riva/wav/es-ES_sample.wav

Final transcripts: 
0 : Rio tigris ten tipo 

Timestamps: 
Word                                    Start (ms)      End (ms)        Confidence      

Rio                                     440             680             3.3618e-02      
tigris                                  840             1400            9.3009e-03      
ten                                     1520            1680            7.9300e-02      
tipo                                    2080            2400            7.8500e-03      


Audio processed: 4.4800e+00 sec.
-----------------------------------------------------------

Not printing latency statistics because the client is run without the --simulate_realtime option and/or the number of requests sent is not equal to number of requests received. To get latency statistics, run with --simulate_realtime and set the --chunk_duration_ms to be the same as the server chunk duration
Run time: 7.9993e-01 sec.
Total audio processed: 5.9760e+00 sec.
Throughput: 7.4706e+00 RTFX

Works!

./riva_stop.sh
Shutting down docker containers...

Change this line in “config.sh”:

# Specify ASR language to deploy, as defined in "asr_models_languages_map" above
# For multiple languages, enter space separated language codes
asr_language_code=(" ru-RU") ## en-US")

Install riva_server again:

silenzio@jetsonnx:~/lib/riva_quickstart_arm64_v2.19.0$ bash riva_init.sh
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Image nvcr.io/nvidia/riva/riva-speech:2.19.0-l4t-aarch64 exists. Skipping.
...
+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.

Test ru-RU language:

root@649ce50514e8:/opt/riva# riva_streaming_asr_client --audio_file=/opt/riva/wav/ru-RU_sample.wav
I0612 13:13:45.072036  7693 grpc.h:101] Using Insecure Server Credentials
Loading eval dataset...
filename: /opt/riva/wav/ru-RU_sample.wav
Done loading 1 files
you give it
you give it
the
the
the prefs use
the
the
the
The 
-----------------------------------------------------------
File: /opt/riva/wav/ru-RU_sample.wav

Final transcripts: 
0 : The 

Timestamps: 
Word                                    Start (ms)      End (ms)        Confidence      

The                                     2160            2200            8.2716e-02      
Audio processed: 3.8400e+00 sec.
-----------------------------------------------------------
Not printing latency statistics because the client is run without the --simulate_realtime option and/or the number of requests sent is not equal to number of requests received. To get latency statistics, run with --simulate_realtime and set the --chunk_duration_ms to be the same as the server chunk duration
Run time: 9.6225e-01 sec.
Total audio processed: 7.6320e+00 sec.
Throughput: 7.9314e+00 RTFX

It does not work…

Sorry, I’m making a multilingual service and I need all the languages you have declared.
Why doesn’t ru-RU work?
I set everything up exactly the same way as with es-ES.

Thanks for your help!

Hi,

Based on the shared log, ru-RU is functional.

Do you mean the accuracy is not as expected?
If so, could you try another file and share the output with us?

Thanks.
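
For comparison, the per-word Confidence column from riva_streaming_asr_client's Timestamps table can be aggregated; a rough stdlib sketch (the four-column row layout is assumed from the logs above):

```python
def transcript_stats(table: str) -> tuple:
    """(word count, mean confidence) from the client's Timestamps table;
    each data row is: word, start_ms, end_ms, confidence."""
    scores = []
    for line in table.splitlines():
        parts = line.split()
        if len(parts) == 4:            # skip headers and blank lines
            try:
                scores.append(float(parts[3]))
            except ValueError:
                continue
    return len(scores), (sum(scores) / len(scores) if scores else 0.0)

ru = "The    2160    2200    8.2716e-02"
print(transcript_stats(ru))  # (1, 0.082716)
```

In the logs above, the ru-RU run keeps only one word versus four for es-ES, which may be a clearer signal of collapsed decoding than the confidence values alone.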