Riva Quickstart 2.1.0 installation fails on AGX Orin

Please provide the following information when requesting support.
Hardware - AGX Orin
Operating System - Ubuntu 20.04 / JetPack 5.0.1
Riva Version - 2.1.0 (Quickstart)
TLT Version (if relevant)
How to reproduce the issue? (This is for errors. Please share the command and the detailed log here.)

I am trying to install the Riva quick start scripts following the instructions in the Riva Quick Start Guide.
When running the riva_start.sh script, I get the following error:

markus@markus-orin:~/riva_quickstart_arm64_v2.1.0$ sudo bash riva_start.sh
Starting Riva Speech Services. This may take several minutes depending on the number of models deployed.
+ docker run -d --init --rm --gpus '"device=0"' -p 50051:50051 -e LD_PRELOAD= -e RIVA_API_KEY= -e RIVA_API_NGC_ORG= -e RIVA_EULA= -v /home/markus/riva_quickstart_arm64_v2.1.0/model_repository:/data --ulimit memlock=-1 --ulimit stack=67108864 --name riva-speech -p 8000:8000 -p 8001:8001 -p 8002:8002 -p 8888:8888 --device /dev/bus/usb --device /dev/snd nvcr.io/nvidia/riva/riva-speech:2.1.0-arm64-server riva_server
d3adc82824f26dd2453687facca7205ba37e30d1efbfe21232159e756ac6daac
+ [[ arm64 == \a\r\m\6\4 ]]
+ set -
+ docker exec riva-speech /opt/tritonserver/bin/tritonserver --model-store /data/models --model-control-mode=poll --log-info=true
Error response from daemon: Container d3adc82824f26dd2453687facca7205ba37e30d1efbfe21232159e756ac6daac is not running
Waiting for Riva server to load all models...retrying in 10 seconds
[the line above repeats 30 times in total]
Health ready check failed.
Check Riva logs with: docker logs riva-speech
markus@markus-orin:~/riva_quickstart_arm64_v2.1.0$ docker logs riva-speech
Error: No such container: riva-speech

Any clue what could be wrong?
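
Note that riva_start.sh starts the container with --rm (see the docker run line above), so the container is removed as soon as riva_server exits; that is why docker logs reports no such container. A minimal sketch for capturing the server log anyway, reusing the image and model directory from the command above (the riva-speech-debug name is only for illustration):

# start the server by hand without --rm so the container survives the crash
$ docker run -d --init --gpus '"device=0"' \
    -v /home/markus/riva_quickstart_arm64_v2.1.0/model_repository:/data \
    --name riva-speech-debug \
    nvcr.io/nvidia/riva/riva-speech:2.1.0-arm64-server riva_server
# wait a few seconds, then read the log even though the container has already exited
$ docker ps -a --filter name=riva-speech-debug
$ docker logs riva-speech-debug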

Hi @user108910

Thanks for your interest in Riva,

We would like to see the logs of riva_init.sh.

Could you run the command below:

bash riva_init.sh

and share the complete output of that command as a file in this forum post?

It would also be helpful to get your config.sh.

Please find attached the two files. As far as I can judge, riva_init.sh runs without problems…

config.sh (10.8 KB)
riva_init.log (3.0 KB)

Hi, I think the main problem comes from the riva-server: libcudart.so.10.2 is missing. Try running this command:
$ docker run -it --rm nvcr.io/nvidia/riva/riva-speech:2.1.0-arm64-server
In the Docker terminal, run the riva_server command; the result will show that libcudart.so.10.2 is missing:

nvidia@nvidia-orin-desktop:~$ docker logs riva-speech
riva_server: error while loading shared libraries: libcudart.so.10.2: cannot open shared object file: No such file or directory
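
To reproduce it yourself, a minimal sketch, assuming the default entrypoint drops you into a shell as described above (otherwise append bash to the docker run line):

$ docker run -it --rm nvcr.io/nvidia/riva/riva-speech:2.1.0-arm64-server
# inside the container:
$ riva_server                      # fails with the libcudart.so.10.2 error shown above
$ ldconfig -p | grep libcudart     # check which (if any) CUDA runtime libraries the image provides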

Hi, it is as you describe in your post. I get the libcudart.so.10.2 error when I run the riva_server command. So far, so good… Any hints on how to resolve the issue?

I recommend waiting for NVIDIA to release a newer Docker image that includes the CUDA library, because it is hard to build CUDA 10.2 for aarch64 into a Docker image yourself (you may be able to download the CUDA 10.2 aarch64 deb via SDK Manager, but I cannot confirm that this is the correct way to install CUDA 10.2 for aarch64; libcudart is in cuda-repo-l4t-10-2-local_10.2.460-1_arm64.deb).

If you are able to install the CUDA 10.2 library into nvcr.io/nvidia/riva/riva-speech:2.1.0-arm64-server, you can then use docker commit to build a new version of the image. You also need to modify config.sh and point the image_speech_api parameter (around line 233) to the image you committed, so that riva_start.sh uses the correct Docker image to run riva_server.
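
A rough sketch of that workaround follows. The deb file name comes from the post above; the inner package layout, the riva-cuda-fix helper container name, and the committed tag are assumptions, and the exact image_speech_api line in your config.sh may differ:

# 1. extract libcudart from the local repo deb on the host
$ dpkg -x cuda-repo-l4t-10-2-local_10.2.460-1_arm64.deb repo/
$ dpkg -x "$(find repo/ -name 'cuda-cudart-10-2*_arm64.deb' | head -n1)" cudart/
$ find cudart/ -name 'libcudart.so*'

# 2. copy the library into a running copy of the stock image and refresh the linker cache
$ docker run -d --name riva-cuda-fix nvcr.io/nvidia/riva/riva-speech:2.1.0-arm64-server sleep infinity
$ for f in $(find cudart/ -name 'libcudart.so.10.2*' -type f); do
    docker cp "$f" riva-cuda-fix:/usr/lib/aarch64-linux-gnu/
  done
$ docker exec riva-cuda-fix ldconfig

# 3. commit the patched image and point the quickstart at it
$ docker commit riva-cuda-fix riva-speech:2.1.0-arm64-server-cuda102
$ docker rm -f riva-cuda-fix
# in config.sh (around line 233), set:
#   image_speech_api="riva-speech:2.1.0-arm64-server-cuda102"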

After solving the riva_server problem by installing the CUDA library manually, I found another problem.
The Docker image nvcr.io/nvidia/riva/riva-speech:2.1.0-arm64-server is based on Ubuntu 18.04, whose default glibc version is 2.27,
but tritonserver needs GLIBC_2.29 and GLIBCXX_3.4.26 (it seems we need an Ubuntu 20.04 base for Triton), so any ideas?

/opt/tritonserver/bin/tritonserver: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so)
/opt/tritonserver/bin/tritonserver: /usr/lib/aarch64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so)
/opt/tritonserver/bin/tritonserver: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /usr/lib/aarch64-linux-gnu/tegra/libnvtvmr.so)
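
A quick way to confirm the mismatch (a sketch only; the tegra library path is taken from the error above, and strings/binutils is assumed to be available on the Jetson host):

# on the Jetson host: which symbol versions the Tegra DLA compiler requires
$ strings /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so | grep -E 'GLIBC_2\.29|GLIBCXX_3\.4\.26'
# inside the riva-speech image: the glibc the Ubuntu 18.04 base actually provides (expect 2.27)
$ docker run --rm --entrypoint ldd nvcr.io/nvidia/riva/riva-speech:2.1.0-arm64-server --version | head -n1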

In this YouTube video, https://www.youtube.com/watch?v=nLYUCoPA3sc, one can see a Riva ASR demo on an AGX Orin. I wonder how Riva was installed, given the problems described above. Is there an alternative to the Riva Quickstart 2.1.0 container?

After solving the CUDA library problem, I committed a new version of the image and ran riva_start.sh, with the following result:

nvidia@nvidia-orin-desktop:~/exp/riva_quickstart_arm64_v2.0.0$ ./riva_start.sh
Starting Riva Speech Services. This may take several minutes depending on the number of models deployed.
a613d91d9e5e2a54d3aeba9f3925753c109255567602273e20181e89e6758d56
I0602 03:31:30.375615 12 pinned_memory_manager.cc:240] Pinned memory pool is created at ‘0x205227000’ with size 268435456
I0602 03:31:30.375802 12 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0602 03:31:30.380195 12 model_repository_manager.cc:994] loading: citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming:1
I0602 03:31:30.480413 12 model_repository_manager.cc:994] loading: citrinet-256-en-US-streaming-feature-extractor-streaming:1
I0602 03:31:30.491002 12 ctc-decoder-library.cc:21] TRITONBACKEND_ModelInitialize: citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming (version 1)
W:parameter_parser.cc:120: Parameter forerunner_start_offset_ms could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
W:parameter_parser.cc:120: Parameter forerunner_start_offset_ms could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
W:parameter_parser.cc:120: Parameter max_num_slots could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
W:parameter_parser.cc:120: Parameter num_tokenization could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
I0602 03:31:30.493238 12 backend_model.cc:255] model configuration:
{
“name”: “citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming”,
“platform”: “”,
“backend”: “riva_asr_decoder”,
“version_policy”: {
“latest”: {
“num_versions”: 1
}
},
“max_batch_size”: 1,
“input”: [
{
“name”: “CLASS_LOGITS”,
“data_type”: “TYPE_FP32”,
“format”: “FORMAT_NONE”,
“dims”: [
-1,
1025
],
“is_shape_tensor”: false,
“allow_ragged_batch”: false,
“optional”: false
},
{
“name”: “END_FLAG”,
“data_type”: “TYPE_UINT32”,
“format”: “FORMAT_NONE”,
“dims”: [
1
],
“is_shape_tensor”: false,
“allow_ragged_batch”: false,
“optional”: false
},
{
“name”: “SEGMENTS_START_END”,
“data_type”: “TYPE_INT32”,
“format”: “FORMAT_NONE”,
“dims”: [
-1,
2
],
“is_shape_tensor”: false,
“allow_ragged_batch”: false,
“optional”: false
},
{
“name”: “CUSTOM_CONFIGURATION”,
“data_type”: “TYPE_STRING”,
“format”: “FORMAT_NONE”,
“dims”: [
-1,
2
],
“is_shape_tensor”: false,
“allow_ragged_batch”: false,
“optional”: false
}
],
“output”: [
{
“name”: “FINAL_TRANSCRIPTS”,
“data_type”: “TYPE_STRING”,
“dims”: [
-1
],
“label_filename”: “”,
“is_shape_tensor”: false
},
{
“name”: “FINAL_TRANSCRIPTS_SCORE”,
“data_type”: “TYPE_FP32”,
“dims”: [
-1
],
“label_filename”: “”,
“is_shape_tensor”: false
},
{
“name”: “FINAL_WORDS_START_END”,
“data_type”: “TYPE_INT32”,
“dims”: [
-1,
2
],
“label_filename”: “”,
“is_shape_tensor”: false
},
{
“name”: “PARTIAL_TRANSCRIPTS”,
“data_type”: “TYPE_STRING”,
“dims”: [
-1
],
“label_filename”: “”,
“is_shape_tensor”: false
},
{
“name”: “PARTIAL_TRANSCRIPTS_STABILITY”,
“data_type”: “TYPE_FP32”,
“dims”: [
-1
],
“label_filename”: “”,
“is_shape_tensor”: false
},
{
“name”: “PARTIAL_WORDS_START_END”,
“data_type”: “TYPE_INT32”,
“dims”: [
-1,
2
],
“label_filename”: “”,
“is_shape_tensor”: false
}
],
“batch_input”: ,
“batch_output”: ,
“optimization”: {
“priority”: “PRIORITY_DEFAULT”,
“cuda”: {
“graphs”: false,
“busy_wait_events”: false,
“graph_spec”: ,
“output_copy_stream”: true
},
“input_pinned_memory”: {
“enable”: true
},
“output_pinned_memory”: {
“enable”: true
},
“gather_kernel_buffer_threshold”: 0,
“eager_batching”: false
},
“sequence_batching”: {
“oldest”: {
“max_candidate_sequences”: 1,
“preferred_batch_size”: [
1
],
“max_queue_delay_microseconds”: 1000
},
“max_sequence_idle_microseconds”: 60000000,
“control_input”: [
{
“name”: “START”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_START”,
“int32_false_true”: [
0,
1
],
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_INVALID”
}
]
},
{
“name”: “READY”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_READY”,
“int32_false_true”: [
0,
1
],
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_INVALID”
}
]
},
{
“name”: “END”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_END”,
“int32_false_true”: [
0,
1
],
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_INVALID”
}
]
},
{
“name”: “CORRID”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_CORRID”,
“int32_false_true”: ,
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_UINT64”
}
]
}
],
“state”:
},
“instance_group”: [
{
“name”: “citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming_0”,
“kind”: “KIND_CPU”,
“count”: 1,
“gpus”: ,
“secondary_devices”: ,
“profile”: ,
“passive”: false,
“host_policy”: “”
}
],
“default_model_filename”: “”,
“cc_model_filenames”: {},
“metric_tags”: {},
“parameters”: {
“use_vad”: {
“string_value”: “True”
},
“lm_weight”: {
“string_value”: “0.2”
},
“blank_token”: {
“string_value”: “#”
},
“vocab_file”: {
“string_value”: “/data/models/citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming/1/riva_decoder_vocabulary.txt”
},
“ms_per_timestep”: {
“string_value”: “80”
},
“streaming”: {
“string_value”: “True”
},
“use_subword”: {
“string_value”: “True”
},
“beam_size”: {
“string_value”: “16”
},
“right_padding_size”: {
“string_value”: “1.92”
},
“beam_size_token”: {
“string_value”: “16”
},
“sil_token”: {
“string_value”: “▁”
},
“beam_threshold”: {
“string_value”: “20.0”
},
“tokenizer_model”: {
“string_value”: “/data/models/citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming/1/50b43e991a5a4d26afab00bc85888757_tokenizer.model”
},
“language_model_file”: {
“string_value”: “/data/models/citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming/1/jarvis_asr_train_datasets_noSpgi_noLS_gt_3gram.binary”
},
“max_execution_batch_size”: {
“string_value”: “1024”
},
“forerunner_use_lm”: {
“string_value”: “true”
},
“forerunner_beam_size_token”: {
“string_value”: “8”
},
“forerunner_beam_threshold”: {
“string_value”: “10.0”
},
“asr_model_delay”: {
“string_value”: “-1”
},
“decoder_num_worker_threads”: {
“string_value”: “-1”
},
“word_insertion_score”: {
“string_value”: “0.2”
},
“left_padding_size”: {
“string_value”: “1.92”
},
“decoder_type”: {
“string_value”: “flashlight”
},
“compute_timestamps”: {
“string_value”: “True”
},
“forerunner_beam_size”: {
“string_value”: “8”
},
“max_supported_transcripts”: {
“string_value”: “1”
},
“chunk_size”: {
“string_value”: “0.16”
},
“lexicon_file”: {
“string_value”: “/data/models/citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming/1/lexicon.txt”
},
“smearing_mode”: {
“string_value”: “max”
}
},
“model_warmup”: ,
“model_transaction_policy”: {
“decoupled”: false
}
}
I0602 03:31:30.493504 12 ctc-decoder-library.cc:23] TRITONBACKEND_ModelInstanceInitialize: citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming_0 (device 0)
I0602 03:31:30.580701 12 model_repository_manager.cc:994] loading: citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming:1
I0602 03:31:30.681082 12 model_repository_manager.cc:994] loading: riva-trt-citrinet-256:1
Waiting for Riva server to load all models…retrying in 10 seconds
I0602 03:31:31.885521 12 model_repository_manager.cc:1149] successfully loaded ‘citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming’ version 1
I0602 03:31:31.886102 12 feature-extractor.cc:407] TRITONBACKEND_ModelInitialize: citrinet-256-en-US-streaming-feature-extractor-streaming (version 1)
W:parameter_parser.cc:120: Parameter is_dither_seed_random could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
W:parameter_parser.cc:120: Parameter max_batch_size could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
W:parameter_parser.cc:120: Parameter max_sequence_idle_microseconds could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
W:parameter_parser.cc:120: Parameter preemph_coeff could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
I0602 03:31:35.116544 12 backend_model.cc:255] model configuration:
{
“name”: “citrinet-256-en-US-streaming-feature-extractor-streaming”,
“platform”: “”,
“backend”: “riva_asr_features”,
“version_policy”: {
“latest”: {
“num_versions”: 1
}
},
“max_batch_size”: 1,
“input”: [
{
“name”: “AUDIO_SIGNAL”,
“data_type”: “TYPE_FP32”,
“format”: “FORMAT_NONE”,
“dims”: [
-1
],
“is_shape_tensor”: false,
“allow_ragged_batch”: false,
“optional”: false
},
{
“name”: “SAMPLE_RATE”,
“data_type”: “TYPE_UINT32”,
“format”: “FORMAT_NONE”,
“dims”: [
1
],
“is_shape_tensor”: false,
“allow_ragged_batch”: false,
“optional”: false
}
],
“output”: [
{
“name”: “AUDIO_FEATURES”,
“data_type”: “TYPE_FP32”,
“dims”: [
80,
-1
],
“label_filename”: “”,
“is_shape_tensor”: false
},
{
“name”: “AUDIO_PROCESSED”,
“data_type”: “TYPE_FP32”,
“dims”: [
1
],
“label_filename”: “”,
“is_shape_tensor”: false
}
],
“batch_input”: ,
“batch_output”: ,
“optimization”: {
“priority”: “PRIORITY_DEFAULT”,
“cuda”: {
“graphs”: false,
“busy_wait_events”: false,
“graph_spec”: ,
“output_copy_stream”: true
},
“input_pinned_memory”: {
“enable”: true
},
“output_pinned_memory”: {
“enable”: true
},
“gather_kernel_buffer_threshold”: 0,
“eager_batching”: false
},
“sequence_batching”: {
“oldest”: {
“max_candidate_sequences”: 1,
“preferred_batch_size”: [
1
],
“max_queue_delay_microseconds”: 1000
},
“max_sequence_idle_microseconds”: 60000000,
“control_input”: [
{
“name”: “START”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_START”,
“int32_false_true”: [
0,
1
],
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_INVALID”
}
]
},
{
“name”: “READY”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_READY”,
“int32_false_true”: [
0,
1
],
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_INVALID”
}
]
},
{
“name”: “END”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_END”,
“int32_false_true”: [
0,
1
],
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_INVALID”
}
]
},
{
“name”: “CORRID”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_CORRID”,
“int32_false_true”: ,
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_UINT64”
}
]
}
],
“state”:
},
“instance_group”: [
{
“name”: “citrinet-256-en-US-streaming-feature-extractor-streaming_0”,
“kind”: “KIND_GPU”,
“count”: 1,
“gpus”: [
0
],
“secondary_devices”: ,
“profile”: ,
“passive”: false,
“host_policy”: “”
}
],
“default_model_filename”: “”,
“cc_model_filenames”: {},
“metric_tags”: {},
“parameters”: {
“use_utterance_norm_params”: {
“string_value”: “False”
},
“precalc_norm_time_steps”: {
“string_value”: “0”
},
“dither”: {
“string_value”: “1e-05”
},
“precalc_norm_params”: {
“string_value”: “False”
},
“norm_per_feature”: {
“string_value”: “True”
},
“mean”: {
“string_value”: “-11.4412, -9.9334, -9.1292, -9.0365, -9.2804, -9.5643, -9.7342, -9.6925, -9.6333, -9.2808, -9.1887, -9.1422, -9.1397, -9.2028, -9.2749, -9.4776, -9.9185, -10.1557, -10.3800, -10.5067, -10.3190, -10.4728, -10.5529, -10.6402, -10.6440, -10.5113, -10.7395, -10.7870, -10.6074, -10.5033, -10.8278, -10.6384, -10.8481, -10.6875, -10.5454, -10.4747, -10.5165, -10.4930, -10.3413, -10.3472, -10.3735, -10.6830, -10.8813, -10.6338, -10.3856, -10.7727, -10.8957, -10.8068, -10.7373, -10.6108, -10.3405, -10.2889, -10.3922, -10.4946, -10.3367, -10.4164, -10.9949, -10.7196, -10.3971, -10.1734, -9.9257, -9.6557, -9.1761, -9.6653, -9.7876, -9.7230, -9.7792, -9.7056, -9.2702, -9.4650, -9.2755, -9.1369, -9.1174, -8.9197, -8.5394, -8.2614, -8.1353, -8.1422, -8.3430, -8.6655”
},
“stddev”: {
“string_value”: “2.2668, 3.1642, 3.7079, 3.7642, 3.5349, 3.5901, 3.7640, 3.8424, 4.0145, 4.1475, 4.0457, 3.9048, 3.7709, 3.6117, 3.3188, 3.1489, 3.0615, 3.0362, 2.9929, 3.0500, 3.0341, 3.0484, 3.0103, 2.9474, 2.9128, 2.8669, 2.8332, 2.9411, 3.0378, 3.0712, 3.0190, 2.9992, 3.0124, 3.0024, 3.0275, 3.0870, 3.0656, 3.0142, 3.0493, 3.1373, 3.1135, 3.0675, 2.8828, 2.7018, 2.6296, 2.8826, 2.9325, 2.9288, 2.9271, 2.9890, 3.0137, 2.9855, 3.0839, 2.9319, 2.3512, 2.3795, 2.6191, 2.7555, 2.9326, 2.9931, 3.1543, 3.0855, 2.6820, 3.0566, 3.1272, 3.1663, 3.1836, 3.0018, 2.9089, 3.1727, 3.1626, 3.1086, 2.9804, 3.1107, 3.2998, 3.3697, 3.3716, 3.2487, 3.1597, 3.1181”
},
“chunk_size”: {
“string_value”: “0.16”
},
“max_execution_batch_size”: {
“string_value”: “1”
},
“sample_rate”: {
“string_value”: “16000”
},
“window_stride”: {
“string_value”: “0.01”
},
“window_size”: {
“string_value”: “0.025”
},
“num_features”: {
“string_value”: “80”
},
“streaming”: {
“string_value”: “True”
},
“transpose”: {
“string_value”: “False”
},
“stddev_floor”: {
“string_value”: “1e-05”
},
“left_padding_size”: {
“string_value”: “1.92”
},
“right_padding_size”: {
“string_value”: “1.92”
},
“gain”: {
“string_value”: “1.0”
}
},
“model_warmup”: ,
“model_transaction_policy”: {
“decoupled”: false
}
}
I0602 03:31:35.118372 12 vad_library.cc:19] TRITONBACKEND_ModelInitialize: citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming (version 1)
W:parameter_parser.cc:120: Parameter max_execution_batch_size could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
W:parameter_parser.cc:120: Parameter max_execution_batch_size could not be set from parameters
W:parameter_parser.cc:121: Default value will be used
I0602 03:31:35.119653 12 backend_model.cc:255] model configuration:
{
“name”: “citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming”,
“platform”: “”,
“backend”: “riva_asr_vad”,
“version_policy”: {
“latest”: {
“num_versions”: 1
}
},
“max_batch_size”: 1,
“input”: [
{
“name”: “CLASS_LOGITS”,
“data_type”: “TYPE_FP32”,
“format”: “FORMAT_NONE”,
“dims”: [
-1,
1025
],
“is_shape_tensor”: false,
“allow_ragged_batch”: false,
“optional”: false
}
],
“output”: [
{
“name”: “SEGMENTS_START_END”,
“data_type”: “TYPE_INT32”,
“dims”: [
-1,
2
],
“label_filename”: “”,
“is_shape_tensor”: false
}
],
“batch_input”: ,
“batch_output”: ,
“optimization”: {
“priority”: “PRIORITY_DEFAULT”,
“cuda”: {
“graphs”: false,
“busy_wait_events”: false,
“graph_spec”: ,
“output_copy_stream”: true
},
“input_pinned_memory”: {
“enable”: true
},
“output_pinned_memory”: {
“enable”: true
},
“gather_kernel_buffer_threshold”: 0,
“eager_batching”: false
},
“sequence_batching”: {
“max_sequence_idle_microseconds”: 60000000,
“control_input”: [
{
“name”: “START”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_START”,
“int32_false_true”: [
0,
1
],
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_INVALID”
}
]
},
{
“name”: “READY”,
“control”: [
{
“kind”: “CONTROL_SEQUENCE_READY”,
“int32_false_true”: [
0,
1
],
“fp32_false_true”: ,
“bool_false_true”: ,
“data_type”: “TYPE_INVALID”
}
]
}
],
“state”:
},
“instance_group”: [
{
“name”: “citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming_0”,
“kind”: “KIND_CPU”,
“count”: 1,
“gpus”: ,
“secondary_devices”: ,
“profile”: ,
“passive”: false,
“host_policy”: “”
}
],
“default_model_filename”: “”,
“cc_model_filenames”: {},
“metric_tags”: {},
“parameters”: {
“residue_blanks_at_end”: {
“string_value”: “0”
},
“vad_start_history”: {
“string_value”: “300”
},
“vad_stop_history”: {
“string_value”: “800”
},
“chunk_size”: {
“string_value”: “0.16”
},
“vad_start_th”: {
“string_value”: “0.2”
},
“vad_stop_th”: {
“string_value”: “0.98”
},
“vad_type”: {
“string_value”: “ctc-vad”
},
“vocab_file”: {
“string_value”: “/data/models/citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming/1/riva_decoder_vocabulary.txt”
},
“residue_blanks_at_start”: {
“string_value”: “-2”
},
“ms_per_timestep”: {
“string_value”: “80”
},
“streaming”: {
“string_value”: “True”
},
“use_subword”: {
“string_value”: “True”
}
},
“model_warmup”: ,
“model_transaction_policy”: {
“decoupled”: false
}
}
I0602 03:31:35.119695 12 feature-extractor.cc:409] TRITONBACKEND_ModelInstanceInitialize: citrinet-256-en-US-streaming-feature-extractor-streaming_0 (device 0)
Waiting for Riva server to load all models...retrying in 10 seconds
[the line above repeats 29 times in total]
Health ready check failed.
Check Riva logs with: docker logs riva-speech
nvidia@nvidia-orin-desktop:~/exp/riva_quickstart_arm64_v2.0.0$ I0602 03:37:23.247325 12 vad_library.cc:22] TRITONBACKEND_ModelInstanceInitialize: citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming_0 (device 0)
E0602 03:37:23.247325 12 model_repository_manager.cc:1152] failed to load 'riva-trt-citrinet-256' version 1: Not found: unable to load shared library: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so)
I0602 03:37:23.247730 12 model_repository_manager.cc:1149] successfully loaded 'citrinet-256-en-US-streaming-feature-extractor-streaming' version 1
I0602 03:37:23.248243 12 model_repository_manager.cc:1149] successfully loaded 'citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming' version 1
E0602 03:37:23.248288 12 model_repository_manager.cc:1332] Invalid argument: ensemble 'citrinet-256-en-US-streaming' depends on 'riva-trt-citrinet-256' which has no loaded version
I0602 03:37:23.248356 12 server.cc:522]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0602 03:37:23.248405 12 server.cc:549]
+-------------------+------------------------------------------------------------------------------+--------+
| Backend | Path | Config |
+-------------------+------------------------------------------------------------------------------+--------+
| riva_asr_features | /opt/tritonserver/backends/riva_asr_features/libtriton_riva_asr_features.so | {} |
| riva_asr_vad | /opt/tritonserver/backends/riva_asr_vad/libtriton_riva_asr_vad.so | {} |
| riva_asr_decoder | /opt/tritonserver/backends/riva_asr_decoder/libtriton_riva_asr_decoder.so | {} |
+-------------------+------------------------------------------------------------------------------+--------+
I0602 03:37:23.248481 12 server.cc:592]
+---------------------------------------------------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Model | Version | Status |
+---------------------------------------------------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming | 1 | READY |
| citrinet-256-en-US-streaming-feature-extractor-streaming | 1 | READY |
| citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming | 1 | READY |
| riva-trt-citrinet-256 | 1 | UNAVAILABLE: Not found: unable to load shared library: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so) |
+---------------------------------------------------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
W0602 03:37:23.248496 12 metrics.cc:301] Neither cache metrics nor gpu metrics are enabled. Not polling for them.
I0602 03:37:23.248595 12 tritonserver.cc:1932]
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.19.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace |
| model_repository_path[0] | /data/models |
| model_control_mode | MODE_POLL |
| strict_model_config | 1 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| response_cache_byte_size | 0 |
| min_supported_compute_capability | 5.3 |
| strict_readiness | 1 |
| exit_timeout | 30 |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0602 03:37:23.248607 12 server.cc:252] Waiting for in-flight requests to complete.
I0602 03:37:23.248612 12 model_repository_manager.cc:1026] unloading: citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming:1
I0602 03:37:23.248655 12 model_repository_manager.cc:1026] unloading: citrinet-256-en-US-streaming-feature-extractor-streaming:1
I0602 03:37:23.248706 12 model_repository_manager.cc:1026] unloading: citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming:1
I0602 03:37:23.248714 12 vad_library.cc:24] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0602 03:37:23.248749 12 server.cc:267] Timeout 30: Found 3 live models and 0 in-flight non-inference requests
I0602 03:37:23.248942 12 feature-extractor.cc:411] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0602 03:37:23.249270 12 ctc-decoder-library.cc:25] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0602 03:37:23.249300 12 vad_library.cc:20] TRITONBACKEND_ModelFinalize: delete model state
I0602 03:37:23.250019 12 feature-extractor.cc:408] TRITONBACKEND_ModelFinalize: delete model state
I0602 03:37:23.250515 12 model_repository_manager.cc:1132] successfully unloaded ‘citrinet-256-en-US-streaming-feature-extractor-streaming’ version 1
I0602 03:37:23.253483 12 model_repository_manager.cc:1132] successfully unloaded ‘citrinet-256-en-US-streaming-voice-activity-detector-ctc-streaming’ version 1
I0602 03:37:23.551786 12 ctc-decoder-library.cc:22] TRITONBACKEND_ModelFinalize: delete model state
I0602 03:37:23.653590 12 model_repository_manager.cc:1132] successfully unloaded ‘citrinet-256-en-US-streaming-ctc-decoder-cpu-streaming’ version 1
I0602 03:37:24.248844 12 server.cc:267] Timeout 29: Found 0 live models and 0 in-flight non-inference requests
error: creating server: Internal - failed to load all models

UNAVAILABLE: Not found: unable to load shared library: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so)

As a result, libm.so.6 needs to provide version GLIBC_2.29, but the glibc version on Ubuntu 18.04 is 2.27…

Hi @user108910 and @sky_4490hffr2,

Thanks for your interest in Riva.

@sky_4490hffr2, we appreciate your inputs, thanks.

I have some further updates from the team:

The Orin is not supported by Riva 2.1.0.

It is supported in the new 2.2.0 release. The 2.2.0 release also has some TTS-related fixes coming for it, so we will have a 2.2.1 release within a week or so.

So please try the new version (perhaps 2.2.1) and let us know if you face any issues.

Thanks
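
For anyone following along, a sketch of moving to the newer quickstart; the NGC resource name is assumed to follow the same pattern as the 2.1.0 arm64 quickstart used above, so check NGC for the exact name and latest version:

$ ngc registry resource download-version "nvidia/riva/riva_quickstart_arm64:2.2.1"
$ cd riva_quickstart_arm64_v2.2.1
$ bash riva_init.sh    # downloads and optimizes the models
$ bash riva_start.sh   # starts the riva-speech container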

I still wonder how it is possible that a range of Riva demos on the AGX Orin are out there, e.g. the AGX Orin Riva Demo. There must have been a way to install Riva on the AGX Orin. Please comment…

The new v2.2.1 has already been released. So far, the quick start guide runs well.

I0610 07:09:01.408912 25 grpc_server.cc:4421] Started GRPCInferenceService at 0.0.0.0:8001
I0610 07:09:01.409403 25 http_server.cc:3113] Started HTTPService at 0.0.0.0:8000
I0610 07:09:01.452001 25 http_server.cc:178] Started Metrics Service at 0.0.0.0:8002
Riva server is ready…
Use this container terminal to run applications:

root@9dd03d0ea818:/# riva_tts_client --voice_name=English-US-Female-1 \
    --text="Hello, this is a speech synthesizer." \
    --audio_file=/work/wav/output.wav
I0610 08:05:52.858871 216 riva_tts_client.cc:103] Using Insecure Server Credentials
Request time: 1.83393 s
Got 211968 bytes back from server

But I faced an error on Xavier NX (v2.6.0):
Model is not available on server: Voice English-US-Female-1 for language en-US not found. Please specify the voice name in your SynthesizeSpeechRequest.
Input was: 'Hello, this is a speech synthesizer.'

Can someone help solve this error? Thanks in advance!
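
Not from the Riva team, but one way to narrow this down: that error usually means no TTS model serving that voice was deployed. A sketch for checking what the server actually loaded, assuming the container is named riva-speech and the ports are mapped as in the quickstart; the repository-index call is a generic Triton API rather than anything Riva-specific, and the service_enabled_tts flag name should be verified against your own config.sh:

# look for TTS-related models in the server log (e.g. fastpitch / hifigan entries)
$ docker logs riva-speech 2>&1 | grep -Ei 'tts|fastpitch|hifigan'
# or ask Triton directly which models are loaded
$ curl -s -X POST localhost:8000/v2/repository/index | python3 -m json.tool
# if no TTS models are listed, enable TTS in config.sh (service_enabled_tts=true),
# then re-run riva_init.sh and riva_start.sh so the voice gets deployed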

Hello @rvinobha,
I faced an error on Xavier NX (v2.6.0):
Model is not available on server: Voice English-US-Female-1 for language en-US not found. Please specify the voice name in your SynthesizeSpeechRequest.
Input was: 'Hello, this is a speech synthesizer.'

Do you have a solution for this error? Thanks in advance!