Is NVIDIA Riva not set up to run on JetPack 6.2?

When running JetPack 6.2 on a Jetson Orin Nano, I have problems installing Riva. The install page says it needs JetPack 6.0, but I simply don't know whether that means only 6.0 or 6.0 and higher.

Could you please share the errors you see? That will help determine whether the issue is related to the JetPack version or is a general setup issue. The dependencies are packaged within the Riva container, so any JetPack 6.x installed natively should be fine.
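In the meantime, two quick checks usually help narrow this down (the log filename below is just an example, and this assumes the standard L4T packages are installed):

# confirm which JetPack / L4T release is installed natively
cat /etc/nv_tegra_release
apt-cache show nvidia-jetpack

# capture the full Riva server log so it can be attached to this thread
docker logs riva-speech &> riva-speech.log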

First, riva_init.sh completes like this:

root@dafirm-orin:/# cd /riva_quickstart_arm64_2.18.0
root@dafirm-orin:/riva_quickstart_arm64_2.18.0# bash riva_init.sh
Please enter API key for ngc.nvidia.com: 
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Image nvcr.io/nvidia/riva/riva-speech:2.18.0-l4t-aarch64 exists. Skipping.

Downloading models (RMIRs) from NGC...
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing RMIRs set the location and corresponding flag in config.sh.
2025-02-05 10:09:11 URL:https://xfiles.ngc.nvidia.com/org/nvidia/team/ngc-apps/recipes/ngc_cli/versions/3.48.0/files/ngccli_arm64.zip?versionId=sPn0KF0IeLN_9vFxB35JiAi3I4VPz.AW&Expires=1738836545&Signature=zl9~pE9n5dCOqgeexZh03y6GLaJXwBpOLw8qbWG~T62XrWY~ZERBcm57Fu~B721HKo8gl4rjsS~i13FHRlEe5jbbwtTSEfF2zPY2CWKEbsevrCGye1OkkbYaMv6dbCdvVw19p2cX-ke7mekRddA7YEIZ4oJN-5G2LX2Cjie5m1MA5Iz~I58ZFcmGlE9dv967uIO4m46VbPtsS5KUfcLxIwqvOYPPpRQqvkZmG7RdcgO-sVJdWLU4WGt1G4uE-nI4Ky2ANpXBANno9v9uv3MmwqQSk7t2~zC5-xaVlGE842JMLrX8tFnq3zZ8F~GslkybvbF6gwYHXvShZbHLajUjqA__&Key-Pair-Id=KCX06E8E9L60W [50007324/50007324] -> "ngccli_arm64.zip" [1]
/opt/riva

CLI_VERSION: Latest - 3.59.0 available (current: 3.48.0). Please update by using the command 'ngc version upgrade' 

Getting files to download...
  ━━ • … • Remaining: 0… • … • Elapsed: 0… • Total: 1 - Completed: 1 - Failed: 0

--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_asr_conformer_en_us_str_v2.18.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 802.72 MB
   Started at: 2025-02-05 10:09:17
   Completed at: 2025-02-05 10:10:33
   Duration taken: 1m 16s
--------------------------------------------------------------------------------
Getting files to download...
  ━━ • … • Remaining: 0… • … • Elapsed: 0… • Total: 1 - Completed: 1 - Failed: 0

--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_nlp_punctuation_bert_base_en_us_v2.18.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 191.75 MB
   Started at: 2025-02-05 10:10:37
   Completed at: 2025-02-05 10:10:57
   Duration taken: 20s
--------------------------------------------------------------------------------
Getting files to download...
  ━━ • … • Remaining: 0… • … • Elapsed: 0… • Total: 1 - Completed: 1 - Failed: 0

--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_nlp_punctuation_bert_base_en_us_v2.18.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 191.75 MB
   Started at: 2025-02-05 10:11:00
   Completed at: 2025-02-05 10:11:19
   Duration taken: 18s
--------------------------------------------------------------------------------
Getting files to download...
  ━━ • … • Remaining: 0… • … • Elapsed: 0… • Total: 1 - Completed: 1 - Failed: 0

--------------------------------------------------------------------------------
   Download status: COMPLETED
   Downloaded local path model: /tmp/artifacts/models_tts_fastpitch_hifigan_en_us_ipa_v2.18.0-tegra-orin
   Total files downloaded: 1
   Total transferred: 187.44 MB
   Started at: 2025-02-05 10:11:22
   Completed at: 2025-02-05 10:11:40
   Duration taken: 17s
--------------------------------------------------------------------------------

+ [[ tegra != \t\e\g\r\a ]]
+ [[ tegra == \t\e\g\r\a ]]
+ '[' -d /riva_quickstart_arm64_2.18.0/model_repository/rmir ']'
+ [[ tegra == \t\e\g\r\a ]]
+ '[' -d /riva_quickstart_arm64_2.18.0/model_repository/prebuilt ']'
+ echo 'Converting prebuilts at /riva_quickstart_arm64_2.18.0/model_repository/prebuilt to Riva Model repository.'
Converting prebuilts at /riva_quickstart_arm64_2.18.0/model_repository/prebuilt to Riva Model repository.
+ docker run -it -d --rm -v /riva_quickstart_arm64_2.18.0/model_repository:/data --name riva-models-extract nvcr.io/nvidia/riva/riva-speech:2.18.0-l4t-aarch64
+ docker exec riva-models-extract bash -c 'mkdir -p /data/models; \
      for file in /data/prebuilt/*.tar.gz; do tar xf $file -C /data/models/ &> /dev/null; done'
+ docker container stop riva-models-extract
+ '[' 0 -ne 0 ']'
+ echo

+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.

Then, riva_start.sh fails its health check:

root@dafirm-orin:/riva_quickstart_arm64_2.18.0# bash riva_start.sh
Starting Riva Speech Services. This may take several minutes depending on the number of models deployed.
Waiting for Riva server to load all models...retrying in 10 seconds
(the line above repeats 30 times)
Health ready check failed.
Check Riva logs with: docker logs riva-speech

And then I check the log with docker logs riva-speech:
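(Side note: to surface only the error and warning lines from the same log, a filter like the one below can be used; the pattern is just a guess at the glog E/W severity prefixes, and the full tail of the log is pasted as-is below.)

docker logs riva-speech 2>&1 | grep -E "^[EW][0-9]{4}"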

e-capability\":\"5.300000\",\"default-max-batch-size\":\"4\"}}"
I0205 10:16:00.369686 21 tensorrt.cc:231] "TRITONBACKEND_ModelInitialize: riva-trt-conformer-en-US-asr-streaming-am-streaming (version 1)"
I0205 10:16:00.373980 21 tensorrt.cc:297] "TRITONBACKEND_ModelInstanceInitialize: riva-trt-conformer-en-US-asr-streaming-am-streaming_0_0 (GPU device 0)"
I0205 10:16:00.380763 21 onnxruntime.cc:2718] "TRITONBACKEND_Initialize: onnxruntime"
I0205 10:16:00.380844 21 onnxruntime.cc:2728] "Triton TRITONBACKEND API version: 1.19"
I0205 10:16:00.380858 21 onnxruntime.cc:2734] "'onnxruntime' TRITONBACKEND API version: 1.16"
I0205 10:16:00.380872 21 onnxruntime.cc:2764] "backend configuration:\n{\"cmdline\":{\"auto-complete-config\":\"false\",\"backend-directory\":\"/opt/tritonserver/backends\",\"min-compute-capability\":\"5.300000\",\"default-max-batch-size\":\"4\"}}"
I0205 10:16:00.438329 21 onnxruntime.cc:2829] "TRITONBACKEND_ModelInitialize: riva-onnx-fastpitch_encoder-English-US (version 1)"
  > Riva waiting for Triton server to load all models...retrying in 1 second
I0205 10:16:00.774895 21 onnxruntime.cc:2894] "TRITONBACKEND_ModelInstanceInitialize: riva-onnx-fastpitch_encoder-English-US_0_0 (GPU device 0)"
I0205 10:16:00.902183 21 logging.cc:46] "Loaded engine size: 244 MiB"
W0205 10:16:00.946866 21 logging.cc:43] "Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors."
I0205 10:16:01.208490 21 logging.cc:46] "[MS] Running engine with multi stream info"
I0205 10:16:01.208550 21 logging.cc:46] "[MS] Number of aux streams is 1"
I0205 10:16:01.208559 21 logging.cc:46] "[MS] Number of total worker streams is 2"
I0205 10:16:01.208565 21 logging.cc:46] "[MS] The main stream provided by execute/enqueue calls is the first worker stream"
I0205 10:16:01.565860 21 pipeline_library.cc:24] "TRITONBACKEND_ModelInitialize: riva-punctuation-en-US (version 1)"
I0205 10:16:01.567694 21 backend_model.cc:303] "model configuration:\n{\n    \"name\": \"riva-punctuation-en-US\",\n    \"platform\": \"\",\n    \"backend\": \"riva_nlp_pipeline\",\n    \"runtime\": \"\",\n    \"version_policy\": {\n        \"latest\": {\n            \"num_versions\": 1\n        }\n    },\n    \"max_batch_size\": 1,\n    \"input\": [\n        {\n            \"name\": \"PIPELINE_INPUT\",\n            \"data_type\": \"TYPE_STRING\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"output\": [\n        {\n            \"name\": \"PIPELINE_OUTPUT\",\n            \"data_type\": \"TYPE_STRING\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"batch_input\": [],\n    \"batch_output\": [],\n    \"optimization\": {\n        \"priority\": \"PRIORITY_DEFAULT\",\n        \"input_pinned_memory\": {\n            \"enable\": true\n        },\n        \"output_pinned_memory\": {\n            \"enable\": true\n        },\n        \"gather_kernel_buffer_threshold\": 0,\n        \"eager_batching\": false\n    },\n    \"instance_group\": [\n        {\n            \"name\": \"riva-punctuation-en-US_0\",\n            \"kind\": \"KIND_CPU\",\n            \"count\": 1,\n            \"gpus\": [],\n            \"secondary_devices\": [],\n            \"profile\": [],\n            \"passive\": false,\n            \"host_policy\": \"\"\n        }\n    ],\n    \"default_model_filename\": \"\",\n    \"cc_model_filenames\": {},\n    \"metric_tags\": {},\n    \"parameters\": {\n        \"vocab\": {\n            \"string_value\": \"f92889b136d2433693cb9127e1aea218_vocab.txt\"\n        },\n        \"pipeline_type\": {\n            \"string_value\": \"punctuation\"\n        },\n        \"bos_token\": {\n            \"string_value\": \"[CLS]\"\n        },\n        \"input_ids_tensor_name\": {\n            \"string_value\": \"input_ids\"\n        },\n        \"unk_token\": {\n            \"string_value\": \"[UNK]\"\n        },\n        \"pad_chars_with_spaces\": {\n            \"string_value\": \"False\"\n        },\n        \"token_type_tensor_name\": {\n            \"string_value\": \"token_type_ids\"\n        },\n        \"attn_mask_tensor_name\": {\n            \"string_value\": \"attention_mask\"\n        },\n        \"capit_logits_tensor_name\": {\n            \"string_value\": \"capit_logits\"\n        },\n        \"tokenizer\": {\n            \"string_value\": \"wordpiece\"\n        },\n        \"capitalization_mapping_path\": {\n            \"string_value\": \"a4ed235fb32c44e58eab5854d3cd94f8_capit_label_ids.csv\"\n        },\n        \"model_family\": {\n            \"string_value\": \"riva\"\n        },\n        \"language_code\": {\n            \"string_value\": \"en-US\"\n        },\n        \"remove_spaces\": {\n            \"string_value\": \"False\"\n        },\n        \"use_int64_nn_inputs\": {\n            \"string_value\": \"False\"\n        },\n        \"eos_token\": {\n            \"string_value\": \"[SEP]\"\n        },\n        \"model_name\": {\n            \"string_value\": \"riva-trt-riva-punctuation-en-US-nn-bert-base-uncased\"\n        },\n        \"delimiter\": {\n            
\"string_value\": \" \"\n        },\n        \"model_api\": {\n            \"string_value\": \"PunctuateText\"\n        },\n        \"unicode_normalize\": {\n            \"string_value\": \"False\"\n        },\n        \"preserve_accents\": {\n            \"string_value\": \"false\"\n        },\n        \"load_model\": {\n            \"string_value\": \"false\"\n        },\n        \"code_point_filename\": {\n            \"string_value\": \"cp_data.json\"\n        },\n        \"punct_logits_tensor_name\": {\n            \"string_value\": \"punct_logits\"\n        },\n        \"punctuation_mapping_path\": {\n            \"string_value\": \"fe160f3a917d411b99852e509e3279a3_punct_label_ids.csv\"\n        },\n        \"to_lower\": {\n            \"string_value\": \"true\"\n        },\n        \"tokenizer_to_lower\": {\n            \"string_value\": \"true\"\n        }\n    },\n    \"model_warmup\": []\n}"
I0205 10:16:01.568022 21 pipeline_library.cc:28] "TRITONBACKEND_ModelInstanceInitialize: riva-punctuation-en-US_0_0 (device 0)"
  > Riva waiting for Triton server to load all models...retrying in 1 second
I0205 10:16:01.688529 21 pipeline_library.cc:22] "TRITONBACKEND_ModelInitialize: conformer-en-US-asr-streaming-asr-bls-ensemble (version 1)"
Found yaml file: /data/models/conformer-en-US-asr-streaming-asr-bls-ensemble/1/riva_bls_config.yaml
I0205 10:16:01.747708 21 backend_model.cc:303] "model configuration:\n{\n    \"name\": \"conformer-en-US-asr-streaming-asr-bls-ensemble\",\n    \"platform\": \"\",\n    \"backend\": \"riva_asr_ensemble_pipeline\",\n    \"runtime\": \"\",\n    \"version_policy\": {\n        \"latest\": {\n            \"num_versions\": 1\n        }\n    },\n    \"max_batch_size\": 1024,\n    \"input\": [\n        {\n            \"name\": \"PIPELINE_INPUT\",\n            \"data_type\": \"TYPE_STRING\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"output\": [\n        {\n            \"name\": \"PIPELINE_OUTPUT\",\n            \"data_type\": \"TYPE_STRING\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"batch_input\": [],\n    \"batch_output\": [],\n    \"optimization\": {\n        \"graph\": {\n            \"level\": 0\n        },\n        \"priority\": \"PRIORITY_DEFAULT\",\n        \"cuda\": {\n            \"graphs\": false,\n            \"busy_wait_events\": false,\n            \"graph_spec\": [],\n            \"output_copy_stream\": true\n        },\n        \"input_pinned_memory\": {\n            \"enable\": true\n        },\n        \"output_pinned_memory\": {\n            \"enable\": true\n        },\n        \"gather_kernel_buffer_threshold\": 0,\n        \"eager_batching\": false\n    },\n    \"sequence_batching\": {\n        \"oldest\": {\n            \"max_candidate_sequences\": 1024,\n            \"preferred_batch_size\": [\n                64,\n                128\n            ],\n            \"max_queue_delay_microseconds\": 1000,\n            \"preserve_ordering\": false\n        },\n        \"max_sequence_idle_microseconds\": 60000000,\n        \"control_input\": [\n            {\n                \"name\": \"START\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_START\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"READY\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_READY\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"END\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_END\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        
\"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"CORRID\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_CORRID\",\n                        \"int32_false_true\": [],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_UINT64\"\n                    }\n                ]\n            }\n        ],\n        \"state\": [],\n        \"iterative_sequence\": false\n    },\n    \"instance_group\": [\n        {\n            \"name\": \"conformer-en-US-asr-streaming-asr-bls-ensemble_0\",\n            \"kind\": \"KIND_CPU\",\n            \"count\": 1,\n            \"gpus\": [],\n            \"secondary_devices\": [],\n            \"profile\": [],\n            \"passive\": false,\n            \"host_policy\": \"\"\n        }\n    ],\n    \"default_model_filename\": \"\",\n    \"cc_model_filenames\": {},\n    \"metric_tags\": {},\n    \"parameters\": {\n        \"yaml_parameters_file\": {\n            \"string_value\": \"riva_bls_config.yaml\"\n        },\n        \"offline\": {\n            \"string_value\": \"False\"\n        },\n        \"language_code\": {\n            \"string_value\": \"en-US\"\n        },\n        \"streaming\": {\n            \"string_value\": \"True\"\n        },\n        \"type\": {\n            \"string_value\": \"online\"\n        },\n        \"sample_rate\": {\n            \"string_value\": \"16000\"\n        },\n        \"model_family\": {\n            \"string_value\": \"riva\"\n        }\n    },\n    \"model_warmup\": [],\n    \"model_transaction_policy\": {\n        \"decoupled\": true\n    }\n}"
I0205 10:16:01.748424 21 pipeline_library.cc:26] "TRITONBACKEND_ModelInstanceInitialize: conformer-en-US-asr-streaming-asr-bls-ensemble_0_0 (device 0)"
I0205 10:16:02.025616 21 model_lifecycle.cc:839] "successfully loaded 'riva-punctuation-en-US'"
I0205 10:16:02.025984 21 tensorrt.cc:231] "TRITONBACKEND_ModelInitialize: riva-trt-hifigan-English-US (version 1)"
I0205 10:16:02.027078 21 backend_model.cc:281] "Overriding execution policy to \"TRITONBACKEND_EXECUTION_BLOCKING\" for sequence model \"riva-trt-hifigan-English-US\""
I0205 10:16:02.027138 21 tensorrt.cc:297] "TRITONBACKEND_ModelInstanceInitialize: riva-trt-hifigan-English-US_0_0 (GPU device 0)"
I0205 10:16:02.032448 21 logging.cc:46] "The logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. Uses of the global logger, returned by nvinfer1::getLogger(), will return the existing value."
I0205 10:16:02.073310 21 logging.cc:46] "Loaded engine size: 28 MiB"
W0205 10:16:02.074079 21 logging.cc:43] "Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors."
I0205 10:16:02.181389 21 logging.cc:46] "[MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +1, GPU +32, now: CPU 1, GPU 300 (MiB)"
I0205 10:16:02.184625 21 instance_state.cc:186] "Created instance riva-trt-hifigan-English-US_0_0 on GPU 0 with stream priority 0 and optimization profile default[0];"
I0205 10:16:02.185870 21 model_lifecycle.cc:839] "successfully loaded 'riva-trt-hifigan-English-US'"
I0205 10:16:02.186318 21 tensorrt.cc:231] "TRITONBACKEND_ModelInitialize: riva-trt-riva-punctuation-en-US-nn-bert-base-uncased (version 1)"
I0205 10:16:02.187287 21 tensorrt.cc:297] "TRITONBACKEND_ModelInstanceInitialize: riva-trt-riva-punctuation-en-US-nn-bert-base-uncased_0_0 (GPU device 0)"
I0205 10:16:02.191712 21 logging.cc:46] "The logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. Uses of the global logger, returned by nvinfer1::getLogger(), will return the existing value."
I0205 10:16:02.555095 21 logging.cc:46] "[MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +1, GPU +69, now: CPU 1, GPU 300 (MiB)"
I0205 10:16:02.555815 21 instance_state.cc:186] "Created instance riva-trt-conformer-en-US-asr-streaming-am-streaming_0_0 on GPU 0 with stream priority 0 and optimization profile default[0];"
I0205 10:16:02.556999 21 model_lifecycle.cc:839] "successfully loaded 'riva-trt-conformer-en-US-asr-streaming-am-streaming'"
I0205 10:16:02.560333 21 spectrogram-chunker.cc:274] "TRITONBACKEND_ModelInitialize: spectrogram_chunker-English-US (version 1)"
I0205 10:16:02.561412 21 backend_model.cc:303] "model configuration:\n{\n    \"name\": \"spectrogram_chunker-English-US\",\n    \"platform\": \"\",\n    \"backend\": \"riva_tts_chunker\",\n    \"runtime\": \"\",\n    \"version_policy\": {\n        \"latest\": {\n            \"num_versions\": 1\n        }\n    },\n    \"max_batch_size\": 1,\n    \"input\": [\n        {\n            \"name\": \"SPECTROGRAM\",\n            \"data_type\": \"TYPE_FP32\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                80,\n                -1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"IS_LAST_SENTENCE\",\n            \"data_type\": \"TYPE_INT32\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"NUM_VALID_FRAMES_IN\",\n            \"data_type\": \"TYPE_INT64\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"SENTENCE_NUM\",\n            \"data_type\": \"TYPE_INT32\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"DURATIONS\",\n            \"data_type\": \"TYPE_FP32\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                -1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"PROCESSED_TEXT\",\n            \"data_type\": \"TYPE_STRING\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"VOLUME\",\n            \"data_type\": \"TYPE_FP32\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                -1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"output\": [\n        {\n            \"name\": \"SPECTROGRAM_CHUNK\",\n            \"data_type\": \"TYPE_FP32\",\n            \"dims\": [\n                80,\n                -1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"END_FLAG\",\n            \"data_type\": \"TYPE_INT32\",\n            \"dims\": [\n                1\n            ],\n            
\"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"NUM_VALID_SAMPLES_OUT\",\n            \"data_type\": \"TYPE_INT32\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"SENTENCE_NUM\",\n            \"data_type\": \"TYPE_INT32\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"DURATIONS\",\n            \"data_type\": \"TYPE_FP32\",\n            \"dims\": [\n                -1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"PROCESSED_TEXT\",\n            \"data_type\": \"TYPE_STRING\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"VOLUME\",\n            \"data_type\": \"TYPE_FP32\",\n            \"dims\": [\n                -1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"batch_input\": [],\n    \"batch_output\": [],\n    \"optimization\": {\n        \"priority\": \"PRIORITY_DEFAULT\",\n        \"input_pinned_memory\": {\n            \"enable\": true\n        },\n        \"output_pinned_memory\": {\n            \"enable\": true\n        },\n        \"gather_kernel_buffer_threshold\": 0,\n        \"eager_batching\": false\n    },\n    \"sequence_batching\": {\n        \"oldest\": {\n            \"max_candidate_sequences\": 1,\n            \"preferred_batch_size\": [\n                1\n            ],\n            \"max_queue_delay_microseconds\": 1000,\n            \"preserve_ordering\": false\n        },\n        \"max_sequence_idle_microseconds\": 60000000,\n        \"control_input\": [\n            {\n                \"name\": \"START\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_START\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"READY\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_READY\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"END\",\n                \"control\": [\n                    {\n                        \"kind\": 
\"CONTROL_SEQUENCE_END\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"CORRID\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_CORRID\",\n                        \"int32_false_true\": [],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_UINT64\"\n                    }\n                ]\n            }\n        ],\n        \"state\": [],\n        \"iterative_sequence\": false\n    },\n    \"instance_group\": [\n        {\n            \"name\": \"spectrogram_chunker-English-US_0\",\n            \"kind\": \"KIND_GPU\",\n            \"count\": 1,\n            \"gpus\": [\n                0\n            ],\n            \"secondary_devices\": [],\n            \"profile\": [],\n            \"passive\": false,\n            \"host_policy\": \"\"\n        }\n    ],\n    \"default_model_filename\": \"\",\n    \"cc_model_filenames\": {},\n    \"metric_tags\": {},\n    \"parameters\": {\n        \"supports_volume\": {\n            \"string_value\": \"True\"\n        },\n        \"num_mels\": {\n            \"string_value\": \"80\"\n        },\n        \"num_samples_per_frame\": {\n            \"string_value\": \"512\"\n        },\n        \"chunk_length\": {\n            \"string_value\": \"80\"\n        },\n        \"max_execution_batch_size\": {\n            \"string_value\": \"1\"\n        }\n    },\n    \"model_warmup\": [],\n    \"model_transaction_policy\": {\n        \"decoupled\": true\n    }\n}"
I0205 10:16:02.561698 21 spectrogram-chunker.cc:276] "TRITONBACKEND_ModelInstanceInitialize: spectrogram_chunker-English-US_0_0 (device 0)"
I0205 10:16:02.563394 21 model_lifecycle.cc:839] "successfully loaded 'spectrogram_chunker-English-US'"
I0205 10:16:02.569844 21 tts-postprocessor.cc:308] "TRITONBACKEND_ModelInitialize: tts_postprocessor-English-US (version 1)"
I0205 10:16:02.571551 21 backend_model.cc:303] "model configuration:\n{\n    \"name\": \"tts_postprocessor-English-US\",\n    \"platform\": \"\",\n    \"backend\": \"riva_tts_postprocessor\",\n    \"runtime\": \"\",\n    \"version_policy\": {\n        \"latest\": {\n            \"num_versions\": 1\n        }\n    },\n    \"max_batch_size\": 1,\n    \"input\": [\n        {\n            \"name\": \"INPUT\",\n            \"data_type\": \"TYPE_FP32\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1,\n                -1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"NUM_VALID_SAMPLES\",\n            \"data_type\": \"TYPE_INT32\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"Prosody_volume\",\n            \"data_type\": \"TYPE_FP32\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                -1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"output\": [\n        {\n            \"name\": \"OUTPUT\",\n            \"data_type\": \"TYPE_FP32\",\n            \"dims\": [\n                -1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"batch_input\": [],\n    \"batch_output\": [],\n    \"optimization\": {\n        \"priority\": \"PRIORITY_DEFAULT\",\n        \"input_pinned_memory\": {\n            \"enable\": true\n        },\n        \"output_pinned_memory\": {\n            \"enable\": true\n        },\n        \"gather_kernel_buffer_threshold\": 0,\n        \"eager_batching\": false\n    },\n    \"sequence_batching\": {\n        \"oldest\": {\n            \"max_candidate_sequences\": 1,\n            \"preferred_batch_size\": [\n                1\n            ],\n            \"max_queue_delay_microseconds\": 100,\n            \"preserve_ordering\": false\n        },\n        \"max_sequence_idle_microseconds\": 60000000,\n        \"control_input\": [\n            {\n                \"name\": \"START\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_START\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"READY\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_READY\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": 
\"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"END\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_END\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"CORRID\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_CORRID\",\n                        \"int32_false_true\": [],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_UINT64\"\n                    }\n                ]\n            }\n        ],\n        \"state\": [],\n        \"iterative_sequence\": false\n    },\n    \"instance_group\": [\n        {\n            \"name\": \"tts_postprocessor-English-US_0\",\n            \"kind\": \"KIND_GPU\",\n            \"count\": 1,\n            \"gpus\": [\n                0\n            ],\n            \"secondary_devices\": [],\n            \"profile\": [],\n            \"passive\": false,\n            \"host_policy\": \"\"\n        }\n    ],\n    \"default_model_filename\": \"\",\n    \"cc_model_filenames\": {},\n    \"metric_tags\": {},\n    \"parameters\": {\n        \"hop_length\": {\n            \"string_value\": \"256\"\n        },\n        \"filter_length\": {\n            \"string_value\": \"1024\"\n        },\n        \"supports_volume\": {\n            \"string_value\": \"True\"\n        },\n        \"num_samples_per_frame\": {\n            \"string_value\": \"512\"\n        },\n        \"chunk_num_samples\": {\n            \"string_value\": \"40960\"\n        },\n        \"fade_length\": {\n            \"string_value\": \"256\"\n        },\n        \"use_denoiser\": {\n            \"string_value\": \"False\"\n        },\n        \"max_execution_batch_size\": {\n            \"string_value\": \"1\"\n        },\n        \"max_chunk_size\": {\n            \"string_value\": \"131072\"\n        }\n    },\n    \"model_warmup\": [],\n    \"model_transaction_policy\": {\n        \"decoupled\": false\n    }\n}"
I0205 10:16:02.574073 21 tts-postprocessor.cc:310] "TRITONBACKEND_ModelInstanceInitialize: tts_postprocessor-English-US_0_0 (device 0)"
I0205 10:16:02.588258 21 model_lifecycle.cc:839] "successfully loaded 'tts_postprocessor-English-US'"
I0205 10:16:02.656494 21 pipeline_library.cc:22] "TRITONBACKEND_ModelInitialize: tts_preprocessor-English-US (version 1)"
I0205 10:16:02.667538 21 backend_model.cc:303] "model configuration:\n{\n    \"name\": \"tts_preprocessor-English-US\",\n    \"platform\": \"\",\n    \"backend\": \"riva_tts_pipeline\",\n    \"runtime\": \"\",\n    \"version_policy\": {\n        \"latest\": {\n            \"num_versions\": 1\n        }\n    },\n    \"max_batch_size\": 1,\n    \"input\": [\n        {\n            \"name\": \"input_string\",\n            \"data_type\": \"TYPE_STRING\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"user_dictionary\",\n            \"data_type\": \"TYPE_STRING\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"speaker\",\n            \"data_type\": \"TYPE_INT64\",\n            \"format\": \"FORMAT_NONE\",\n            \"dims\": [\n                1\n            ],\n            \"is_shape_tensor\": false,\n            \"allow_ragged_batch\": false,\n            \"optional\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"output\": [\n        {\n            \"name\": \"output\",\n            \"data_type\": \"TYPE_INT64\",\n            \"dims\": [\n                -1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"output_mask\",\n            \"data_type\": \"TYPE_FP32\",\n            \"dims\": [\n                1,\n                400,\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"output_length\",\n            \"data_type\": \"TYPE_INT32\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"is_last_sentence\",\n            \"data_type\": \"TYPE_INT32\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"output_string\",\n            \"data_type\": \"TYPE_STRING\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"sentence_num\",\n            \"data_type\": \"TYPE_INT32\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"pitch\",\n            \"data_type\": \"TYPE_FP32\",\n            \"dims\": [\n                -1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": 
false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"duration\",\n            \"data_type\": \"TYPE_FP32\",\n            \"dims\": [\n                -1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"volume\",\n            \"data_type\": \"TYPE_FP32\",\n            \"dims\": [\n                -1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        },\n        {\n            \"name\": \"speaker\",\n            \"data_type\": \"TYPE_INT64\",\n            \"dims\": [\n                1\n            ],\n            \"label_filename\": \"\",\n            \"is_shape_tensor\": false,\n            \"is_non_linear_format_io\": false\n        }\n    ],\n    \"batch_input\": [],\n    \"batch_output\": [],\n    \"optimization\": {\n        \"graph\": {\n            \"level\": 0\n        },\n        \"priority\": \"PRIORITY_DEFAULT\",\n        \"cuda\": {\n            \"graphs\": false,\n            \"busy_wait_events\": false,\n            \"graph_spec\": [],\n            \"output_copy_stream\": true\n        },\n        \"input_pinned_memory\": {\n            \"enable\": true\n        },\n        \"output_pinned_memory\": {\n            \"enable\": true\n        },\n        \"gather_kernel_buffer_threshold\": 0,\n        \"eager_batching\": false\n    },\n    \"sequence_batching\": {\n        \"oldest\": {\n            \"max_candidate_sequences\": 1,\n            \"preferred_batch_size\": [\n                1\n            ],\n            \"max_queue_delay_microseconds\": 100,\n            \"preserve_ordering\": false\n        },\n        \"max_sequence_idle_microseconds\": 60000000,\n        \"control_input\": [\n            {\n                \"name\": \"START\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_START\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"READY\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_READY\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"END\",\n                \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_END\",\n                        \"int32_false_true\": [\n                            0,\n                            1\n                        ],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_INVALID\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"CORRID\",\n 
               \"control\": [\n                    {\n                        \"kind\": \"CONTROL_SEQUENCE_CORRID\",\n                        \"int32_false_true\": [],\n                        \"fp32_false_true\": [],\n                        \"bool_false_true\": [],\n                        \"data_type\": \"TYPE_UINT64\"\n                    }\n                ]\n            }\n        ],\n        \"state\": [],\n        \"iterative_sequence\": false\n    },\n    \"instance_group\": [\n        {\n            \"name\": \"tts_preprocessor-English-US_0\",\n            \"kind\": \"KIND_GPU\",\n            \"count\": 1,\n            \"gpus\": [\n                0\n            ],\n            \"secondary_devices\": [],\n            \"profile\": [],\n            \"passive\": false,\n            \"host_policy\": \"\"\n        }\n    ],\n    \"default_model_filename\": \"\",\n    \"cc_model_filenames\": {},\n    \"metric_tags\": {},\n    \"parameters\": {\n        \"supports_speaker_mixing\": {\n            \"string_value\": \"False\"\n        },\n        \"normalize_pitch\": {\n            \"string_value\": \"True\"\n        },\n        \"pad_with_space\": {\n            \"string_value\": \"True\"\n        },\n        \"n_fft\": {\n            \"string_value\": \"1024\"\n        },\n        \"zero_shot_sample_rate\": {\n            \"string_value\": \"0\"\n        },\n        \"upper_case_chars\": {\n            \"string_value\": \"True\"\n        },\n        \"enable_emphasis_tag\": {\n            \"string_value\": \"True\"\n        },\n        \"subvoices\": {\n            \"string_value\": \"Female-1:0,Male-1:1,Female-Neutral:2,Male-Neutral:3,Female-Angry:4,Male-Angry:5,Female-Calm:6,Male-Calm:7,Female-Fearful:10,Female-Happy:12,Male-Happy:13,Female-Sad:14\"\n        },\n        \"start_of_emphasis_token\": {\n            \"string_value\": \"[\"\n        },\n        \"is_pflow\": {\n            \"string_value\": \"False\"\n        },\n        \"subvoices_dir\": {\n            \"string_value\": \".\"\n        },\n        \"max_batch_size\": {\n            \"string_value\": \"1\"\n        },\n        \"dictionary_path\": {\n            \"string_value\": \"ipa_cmudict_single_pron-0.82_nv24.08.txt\"\n        },\n        \"end_of_emphasis_token\": {\n            \"string_value\": \"]\"\n        },\n        \"max_sequence_length\": {\n            \"string_value\": \"400\"\n        },\n        \"hop_size\": {\n            \"string_value\": \"256\"\n        },\n        \"win_size\": {\n            \"string_value\": \"1024\"\n        },\n        \"language\": {\n            \"string_value\": \"en-US\"\n        },\n        \"upper_case_g2p\": {\n            \"string_value\": \"True\"\n        },\n        \"supports_ragged_batches\": {\n            \"string_value\": \"True\"\n        },\n        \"mapping_path\": {\n            \"string_value\": \"mapping.txt\"\n        },\n        \"decoupled_mode\": {\n            \"string_value\": \"True\"\n        },\n        \"max_input_length\": {\n            \"string_value\": \"2000\"\n        },\n        \"uses_int32\": {\n            \"string_value\": \"False\"\n        },\n        \"num_mels\": {\n            \"string_value\": \"80\"\n        },\n        \"center\": {\n            \"string_value\": \"False\"\n        },\n        \"expected_outputs\": {\n            \"string_value\": \"output,output_mask,output_length,is_last_sentence,output_string,sentence_num,pitch,duration,volume,speaker\"\n        },\n        \"abbreviations_path\": {\n            
\"string_value\": \"abbr.txt\"\n        },\n        \"pipeline_type\": {\n            \"string_value\": \"preprocessor\"\n        },\n        \"phone_set\": {\n            \"string_value\": \"ipa\"\n        },\n        \"pitch_std\": {\n            \"string_value\": \"68.77673200611284\"\n        },\n        \"norm_proto_path\": {\n            \"string_value\": \".\"\n        },\n        \"g2p_ignore_ambiguous\": {\n            \"string_value\": \"True\"\n        },\n        \"batch_mode\": {\n            \"string_value\": \"False\"\n        }\n    },\n    \"model_warmup\": [],\n    \"model_transaction_policy\": {\n        \"decoupled\": true\n    }\n}"
I0205 10:16:02.667903 21 pipeline_library.cc:26] "TRITONBACKEND_ModelInstanceInitialize: tts_preprocessor-English-US_0_0 (device 0)"
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0205 10:16:02.669412    42 preprocessor.cc:244] TTS character mapping loaded from /data/models/tts_preprocessor-English-US/1/mapping.txt
  > Riva waiting for Triton server to load all models...retrying in 1 second
I0205 10:16:02.834165 21 logging.cc:46] "Loaded engine size: 209 MiB"
W0205 10:16:02.834390 21 logging.cc:43] "Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors."
2025-02-05 10:16:02.912939164 [W:onnxruntime:, session_state.cc:1162 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2025-02-05 10:16:02.913357638 [W:onnxruntime:, session_state.cc:1164 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
I0205 10:16:03.421285    42 preprocessor.cc:287] TTS phonetic mapping loaded from /data/models/tts_preprocessor-English-US/1/ipa_cmudict_single_pron-0.82_nv24.08.txt
I0205 10:16:03.421736    42 preprocessor.cc:300] Abbreviation mapping loaded from /data/models/tts_preprocessor-English-US/1/abbr.txt
I0205 10:16:03.421937    42 normalize.cc:52] Speech Class far file missing:/data/models/tts_preprocessor-English-US/1/./speech_class.far
I0205 10:16:03.421968    42 normalize.cc:56] Post Process far file missing:/data/models/tts_preprocessor-English-US/1/./post_process.far
I0205 10:16:03.421985    42 normalize.cc:60] Pre Process far file missing:/data/models/tts_preprocessor-English-US/1/./pre_process.far
I0205 10:16:03.422273    42 normalizer.cc:66] Proto String: tokenizer_grammar:  "tokenizer.ascii_proto"
verbalizer_grammar:  "verbalizer.ascii_proto"
sentence_boundary_regexp: "[\\.:!\\?] "
sentence_boundary_exceptions_file: "sentence_boundary_exceptions.txt"
I0205 10:16:03.741950    42 abstract-grm-manager.h:168] Updating FST 0xfff,e63,3f9,a80 with input label sorted version.
E0205 10:16:03.815973 21 logging.cc:40] "ICudaEngine::createExecutionContext: Error Code 1: Myelin ([cask_fusion.cpp:initialize_cask_fusion:381] Cask unknown internal error)"
I0205 10:16:03.816598 21 tensorrt.cc:353] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
E0205 10:16:03.816734 21 backend_model.cc:692] "ERROR: Failed to create instance: unable to create TensorRT context: ICudaEngine::createExecutionContext: Error Code 1: Myelin ([cask_fusion.cpp:initialize_cask_fusion:381] Cask unknown internal error)"
I0205 10:16:03.816775 21 tensorrt.cc:274] "TRITONBACKEND_ModelFinalize: delete model state"
E0205 10:16:03.816996 21 model_lifecycle.cc:642] "failed to load 'riva-trt-riva-punctuation-en-US-nn-bert-base-uncased' version 1: Internal: unable to create TensorRT context: ICudaEngine::createExecutionContext: Error Code 1: Myelin ([cask_fusion.cpp:initialize_cask_fusion:381] Cask unknown internal error)"
I0205 10:16:03.817250 21 model_lifecycle.cc:777] "failed to load 'riva-trt-riva-punctuation-en-US-nn-bert-base-uncased'"
I0205 10:16:03.825642 21 model_lifecycle.cc:839] "successfully loaded 'riva-onnx-fastpitch_encoder-English-US'"
  > Riva waiting for Triton server to load all models...retrying in 1 second
I0205 10:16:03.848896    42 abstract-grm-manager.h:168] Updating FST 0xfff,e63,559,810 with input label sorted version.
I0205 10:16:03.853106    42 abstract-grm-manager.h:168] Updating FST 0xfff,dc8,a8c,950 with input label sorted version.
I0205 10:16:03.854973 21 model_lifecycle.cc:839] "successfully loaded 'tts_preprocessor-English-US'"
  > Riva waiting for Triton server to load all models...retrying in 1 second
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0205 10:16:05.008001    39 asr_ensemble_factory.cc:259] Loading acoustic model
I0205 10:16:05.009349    39 asr_ensemble_factory.cc:265] Done loading acoustic model
I0205 10:16:05.016336    39 normalize.cc:56] Post Process far file missing:/data/models/conformer-en-US-asr-streaming-asr-bls-ensemble/1/./post_process.far
I0205 10:16:05.016415    39 normalize.cc:60] Pre Process far file missing:/data/models/conformer-en-US-asr-streaming-asr-bls-ensemble/1/./pre_process.far
I0205 10:16:05.063481    39 normalizer.cc:66] Proto String: tokenizer_grammar:  "tokenizer.ascii_proto"
verbalizer_grammar:  "verbalizer.ascii_proto"
sentence_boundary_regexp: "[\\.:!\\?] "
sentence_boundary_exceptions_file: "sentence_boundary_exceptions.txt"
I0205 10:16:05.153332 21 model_lifecycle.cc:839] "successfully loaded 'conformer-en-US-asr-streaming-asr-bls-ensemble'"
I0205 10:16:05.156177 21 model_lifecycle.cc:472] "loading: fastpitch_hifigan_ensemble-English-US:1"
I0205 10:16:05.157352 21 model_lifecycle.cc:839] "successfully loaded 'fastpitch_hifigan_ensemble-English-US'"
I0205 10:16:05.158552 21 server.cc:604] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0205 10:16:05.158697 21 server.cc:631] 
+----------------------------+-----------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Backend                    | Path                                                                                          | Config                                                                                                                                                         |
+----------------------------+-----------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tensorrt                   | /opt/tritonserver/backends/tensorrt/libtriton_tensorrt.so                                     | {"cmdline":{"auto-complete-config":"false","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"5.300000","default-max-batch-size":"4"}} |
| onnxruntime                | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so                               | {"cmdline":{"auto-complete-config":"false","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"5.300000","default-max-batch-size":"4"}} |
| riva_nlp_pipeline          | /opt/tritonserver/backends/riva_nlp_pipeline/libtriton_riva_nlp_pipeline.so                   | {"cmdline":{"auto-complete-config":"false","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"5.300000","default-max-batch-size":"4"}} |
| riva_asr_ensemble_pipeline | /opt/tritonserver/backends/riva_asr_ensemble_pipeline/libtriton_riva_asr_ensemble_pipeline.so | {"cmdline":{"auto-complete-config":"false","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"5.300000","default-max-batch-size":"4"}} |
| riva_tts_chunker           | /opt/tritonserver/backends/riva_tts_chunker/libtriton_riva_tts_chunker.so                     | {"cmdline":{"auto-complete-config":"false","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"5.300000","default-max-batch-size":"4"}} |
| riva_tts_postprocessor     | /opt/tritonserver/backends/riva_tts_postprocessor/libtriton_riva_tts_postprocessor.so         | {"cmdline":{"auto-complete-config":"false","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"5.300000","default-max-batch-size":"4"}} |
| riva_tts_pipeline          | /opt/tritonserver/backends/riva_tts_pipeline/libtriton_riva_tts_pipeline.so                   | {"cmdline":{"auto-complete-config":"false","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"5.300000","default-max-batch-size":"4"}} |
+----------------------------+-----------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+

I0205 10:16:05.159048 21 server.cc:674] 
+------------------------------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Model                                                | Version | Status                                                                                                                                                                                         |
+------------------------------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| conformer-en-US-asr-streaming-asr-bls-ensemble       | 1       | READY                                                                                                                                                                                          |
| fastpitch_hifigan_ensemble-English-US                | 1       | READY                                                                                                                                                                                          |
| riva-onnx-fastpitch_encoder-English-US               | 1       | READY                                                                                                                                                                                          |
| riva-punctuation-en-US                               | 1       | READY                                                                                                                                                                                          |
| riva-trt-conformer-en-US-asr-streaming-am-streaming  | 1       | READY                                                                                                                                                                                          |
| riva-trt-hifigan-English-US                          | 1       | READY                                                                                                                                                                                          |
| riva-trt-riva-punctuation-en-US-nn-bert-base-uncased | 1       | UNAVAILABLE: Internal: unable to create TensorRT context: ICudaEngine::createExecutionContext: Error Code 1: Myelin ([cask_fusion.cpp:initialize_cask_fusion:381] Cask unknown internal error) |
| spectrogram_chunker-English-US                       | 1       | READY                                                                                                                                                                                          |
| tts_postprocessor-English-US                         | 1       | READY                                                                                                                                                                                          |
| tts_preprocessor-English-US                          | 1       | READY                                                                                                                                                                                          |
+------------------------------------------------------+---------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I0205 10:16:05.160446 21 tritonserver.cc:2598] 
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                                                                                                                           |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                                                                                                                          |
| server_version                   | 2.50.0                                                                                                                                                                                                          |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logging |
| model_repository_path[0]         | /data/models                                                                                                                                                                                                    |
| model_control_mode               | MODE_NONE                                                                                                                                                                                                       |
| strict_model_config              | 1                                                                                                                                                                                                               |
| model_config_name                |                                                                                                                                                                                                                 |
| rate_limit                       | OFF                                                                                                                                                                                                             |
| pinned_memory_pool_byte_size     | 268435456                                                                                                                                                                                                       |
| cuda_memory_pool_byte_size{0}    | 1000000000                                                                                                                                                                                                      |
| min_supported_compute_capability | 5.3                                                                                                                                                                                                             |
| strict_readiness                 | 1                                                                                                                                                                                                               |
| exit_timeout                     | 30                                                                                                                                                                                                              |
| cache_enabled                    | 0                                                                                                                                                                                                               |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I0205 10:16:05.160588 21 server.cc:305] "Waiting for in-flight requests to complete."
I0205 10:16:05.160620 21 server.cc:321] "Timeout 30: Found 0 model versions that have in-flight inferences"
I0205 10:16:05.162858 21 model_lifecycle.cc:624] "successfully unloaded 'fastpitch_hifigan_ensemble-English-US' version 1"
I0205 10:16:05.162889 21 tensorrt.cc:353] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
I0205 10:16:05.163081 21 onnxruntime.cc:2946] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
I0205 10:16:05.163217 21 tensorrt.cc:353] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
I0205 10:16:05.165703 21 pipeline_library.cc:31] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
I0205 10:16:05.165360 21 tts-postprocessor.cc:313] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
I0205 10:16:05.170421 21 pipeline_library.cc:29] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
I0205 10:16:05.165963 21 server.cc:336] "All models are stopped, unloading models"
I0205 10:16:05.170524 21 server.cc:345] "Timeout 30: Found 8 live models and 0 in-flight non-inference requests"
I0205 10:16:05.170603 21 spectrogram-chunker.cc:279] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
I0205 10:16:05.170751 21 spectrogram-chunker.cc:275] "TRITONBACKEND_ModelFinalize: delete model state"
I0205 10:16:05.171063 21 model_lifecycle.cc:624] "successfully unloaded 'spectrogram_chunker-English-US' version 1"
I0205 10:16:05.171528 21 pipeline_library.cc:30] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
I0205 10:16:05.192425 21 pipeline_library.cc:27] "TRITONBACKEND_ModelFinalize: delete model state"
I0205 10:16:05.192809 21 model_lifecycle.cc:624] "successfully unloaded 'riva-punctuation-en-US' version 1"
I0205 10:16:05.206838 21 tts-postprocessor.cc:309] "TRITONBACKEND_ModelFinalize: delete model state"
I0205 10:16:05.227682 21 model_lifecycle.cc:624] "successfully unloaded 'tts_postprocessor-English-US' version 1"
I0205 10:16:05.229947 21 tensorrt.cc:274] "TRITONBACKEND_ModelFinalize: delete model state"
I0205 10:16:05.258263 21 pipeline_library.cc:25] "TRITONBACKEND_ModelFinalize: delete model state"
I0205 10:16:05.260529 21 model_lifecycle.cc:624] "successfully unloaded 'tts_preprocessor-English-US' version 1"
I0205 10:16:05.283593 21 model_lifecycle.cc:624] "successfully unloaded 'riva-trt-hifigan-English-US' version 1"
I0205 10:16:05.293984 21 onnxruntime.cc:2870] "TRITONBACKEND_ModelFinalize: delete model state"
I0205 10:16:05.294735 21 model_lifecycle.cc:624] "successfully unloaded 'riva-onnx-fastpitch_encoder-English-US' version 1"
I0205 10:16:05.401154 21 tensorrt.cc:274] "TRITONBACKEND_ModelFinalize: delete model state"
I0205 10:16:05.412964 21 model_lifecycle.cc:624] "successfully unloaded 'riva-trt-conformer-en-US-asr-streaming-am-streaming' version 1"
I0205 10:16:05.567293 21 pipeline_library.cc:25] "TRITONBACKEND_ModelFinalize: delete model state"
I0205 10:16:05.567796 21 model_lifecycle.cc:624] "successfully unloaded 'conformer-en-US-asr-streaming-asr-bls-ensemble' version 1"
  > Riva waiting for Triton server to load all models...retrying in 1 second
I0205 10:16:06.170683 21 server.cc:345] "Timeout 29: Found 0 live models and 0 in-flight non-inference requests"
error: creating server: Internal - failed to load all models
  > Riva waiting for Triton server to load all models...retrying in 1 second
  > Triton server died before reaching ready state. Terminating Riva startup.
Check Triton logs with: docker logs 
/opt/riva/bin/start-riva: line 1: kill: (21) - No such process

@flindt thank you for reporting this issue. The team was able to reproduce the error - it’s related to Jetpack software.
Nothing is needed from your side at this point.
In future, you can also contact support and file a bug report - that will help with tracking the fix.

ok - thanks for replying.

I'm relatively new at this, which is why I had no idea this was a bug - I have tried a lot of different setups, and many of them did not work.

So - I tried to build a JetBot myself and follow the steps here - on Jetpack 6.0 - to get it running:

But with no luck - and with very little response.

Then I tried doing both of these setups from Jen Hung Ho - but I did it on 6.2:

Again with no success - maybe because I could not install Riva.

So - can I get any help on this?

As I also asked - is Riva not set up to run on 6.2? I can understand that there is a bug, so for now it does not run on 6.2 - meaning that for now I need to go back to 6.0 to run Riva, and then also run the two Jen-Hung-Ho containers…?

Thanks again for replying.

Hello, was the initial issue resolved? (Being unable to install Riva when running JetPack 6.2.) Will Riva run successfully on my Jetson Orin Nano Super running the latest JetPack 6.2?
Thanks

No, as far as I understand - there was no solution for this.
I'm now rolling it all back to JetPack 6.0… and I will see whether Riva can then run OK…

Bumping this!
I have exactly the same issue.
It is a pity to have updated the Orin to JetPack 6.2 and not be able to run most of the speech and video models…
Can NVIDIA please comment on this, ideally with a support schedule, or at least explain the reasons for these bugs?

@servak11 @amargolin Is the determination here that Riva doesn’t work on Jetpack 6.2? How about 6.1? Or 6.0?

Hi @sappington.wesley - it looks like JetPack 6.0 is what you need: Support Matrix — NVIDIA Riva.
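
If you're not sure which version a given board is actually running, something like this should show it (standard L4T / JetPack commands on a stock image - just a quick sketch, output format may vary):

cat /etc/nv_tegra_release                          # prints the L4T release, e.g. "R36 (release), REVISION: ..."
apt-cache show nvidia-jetpack | grep -m1 Version   # version of the JetPack meta-package, if installed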

I'm getting the same issue on JetPack 6.0. Is there an alternative installation method I could try? Or a way to get logs in spite of the docker command not working?

Any update on this? I am facing the same error on riva-start on JetPack 6.0, L4T 36.3.0.
Error from docker logs riva-speech =>

"TRITONBACKEND_ModelInstanceInitialize: tts_preprocessor-English-US_0_0 (device 0)"
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0316 15:13:34.464315 38 preprocessor.cc:244] TTS character mapping loaded from /data/models/tts_preprocessor-English-US/1/mapping.txt

Riva waiting for Triton server to load all models...retrying in 1 second
E0316 15:13:35.219819 21 logging.cc:40] "ICudaEngine::createExecutionContext: Error Code 1: Myelin ([cask_fusion.cpp:initialize_cask_fusion:381] Cask unknown internal error)"
I0316 15:13:35.221454 21 tensorrt.cc:353] "TRITONBACKEND_ModelInstanceFinalize: delete instance state"
E0316 15:13:35.222126 21 backend_model.cc:692] "ERROR: Failed to create instance: unable to create TensorRT context: ICudaEngine::createExecutionContext: Error Code 1: Myelin ([cask_fusion.cpp:initialize_cask_fusion:381] Cask unknown internal error)"
I0316 15:13:35.222703 21 tensorrt.cc:274] "TRITONBACKEND_ModelFinalize: delete model state"
E0316 15:13:35.223611 21 model_lifecycle.cc:642] "failed to load 'riva-trt-riva-punctuation-en-US-nn-bert-base-uncased' version 1: Internal: unable to create TensorRT context: ICudaEngine::createExecutionContext: Error Code 1: Myelin ([cask_fusion.cpp:initialize_cask_fusion:381] Cask unknown internal error)"
I0316 15:13:35.224240 21 model_lifecycle.cc:777] "failed to load 'riva-trt-riva-punctuation-en-US-nn-bert-base-uncased'"
I0316 15:13:35.750476 38 preprocessor.cc:287] TTS phonetic mapping loaded from /data/models/tts_preprocessor-English-US/1/ipa_cmudict_single_pron-0.82_nv24.08.txt
I0316 15:13:35.767104 38 preprocessor.cc:300] Abbreviation mapping loaded from /data/models/tts_preprocessor-English-US/1/abbr.txt
I0316 15:13:35.767253 38 normalize.cc:52] Speech Class far file missing:/data/models/tts_preprocessor-English-US/1/./speech_class.far
I0316 15:13:35.767287 38 normalize.cc:56] Post Process far file missing:/data/models/tts_preprocessor-English-US/1/./post_process.far
I0316 15:13:35.767310 38 normalize.cc:60] Pre Process far file missing:/data/models/tts_preprocessor-English-US/1/./pre_process.far
I0316 15:13:35.767329 38 normalizer.cc:66] Proto String: tokenizer_grammar: "tokenizer.ascii_proto"

Hey @anand.kr,

We’re not able to recreate this error internally.

Wanted to check - did you follow these instructions for installing JetPack 6.0? Install Jetson Software with SDK Manager — SDK Manager. That comes with all the prerequisites needed for Riva installation.

The Riva Quick Start install guide is here - Quick Start Guide — NVIDIA Riva.
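
For reference, the flow on an Orin should look roughly like this (a sketch based on the 2.18.0 ARM64 quickstart used earlier in this thread - adjust the version and paths to whatever you downloaded):

cd riva_quickstart_arm64_2.18.0
# review config.sh first (which services to enable, where models are stored, NGC key)
bash riva_init.sh      # pulls the riva-speech container and downloads/deploys the models
bash riva_start.sh     # starts the riva-speech container
# if startup fails, the container log is the most useful thing to attach:
docker logs riva-speech > riva-speech.log 2>&1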

Best,

Sophie

Hi

Thanks for your reply :)
I installed JetPack 6.0 on an SD card and then migrated to an SSD using this tutorial:

(The reason for not using it is that SDK Manager did not work for me at all - it kept complaining that it could not detect the Jetson Orin during setup, even though it did detect it in SDK install step 1.) It was not an easy experience getting onto an SSD via SDK Manager, hence I followed the disk-copy approach suggested by JetsonHacks.

However, ultimately I can confirm that jtop shows the JetPack version as 6.0 and that I am running the Jetson from the SSD:

I am able to run Ollama and llama3.2 in jetson-containers without issues. It's only the Riva setup that has the issue, as reported earlier.

@anand.kr you're likely missing dependencies because you didn't install through SDK Manager.
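
If reflashing with SDK Manager isn't practical right away, it may also be worth confirming that the full JetPack component set actually made it onto the SSD copy - something along these lines (assuming the standard NVIDIA apt sources are still configured on the image):

sudo apt update
sudo apt install nvidia-jetpack    # pulls in the full JetPack 6.x component set if anything is missing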

Do you mind starting a topic on the Jetson forum (Jetson & Embedded Systems - NVIDIA Developer Forums) about the issues installing with SDK Manager? Thanks, Sophie.