Jetson Orin Nano 4GB: Riva fails to start

Jetson Orin Nano 4GB, 20 GB swap, JetPack 5.1.2, Riva 2.14.0

Could you please help me check whether the 4GB Orin Nano is supported? The desktop occupies part of the memory, so the actual available RAM is about 1.9 GB plus 20 GB of swap.
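Before starting Riva on a board this small, it helps to confirm how much memory is actually free. A quick check, using standard Linux tools:

```shell
# Show total/used/free RAM and swap in human-readable units
free -h

# List active swap devices and their sizes
swapon --show
```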

service_enabled_asr=false
service_enabled_nlp=false
service_enabled_tts=true
service_enabled_nmt=false
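For reference, these flags live in the quickstart's `config.sh` and take effect when models are (re)deployed with `riva_init.sh`. A sketch of the relevant section, assuming the default quickstart layout:

```shell
# config.sh (Riva quickstart) - enable only TTS to minimize memory use
service_enabled_asr=false
service_enabled_nlp=false
service_enabled_tts=true
service_enabled_nmt=false
```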

bash riva_start.sh

Starting Riva Speech Services. This may take several minutes depending on the number of models deployed.
Waiting for Riva server to load all models…retrying in 10 seconds
Waiting for Riva server to load all models…retrying in 10 seconds
Waiting for Riva server to load all models…retrying in 10 seconds

docker logs -f riva-speech
/opt/riva/bin/start-riva: line 10: curl: command not found
/opt/riva/bin/start-riva: line 11: [: -ne: unary operator expected
Triton server is ready…
I1226 01:57:06.602414 23 riva_server.cc:126] Using Insecure Server Credentials
E1226 01:57:06.615759 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
I1226 01:57:06.767200 21 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x20296e000' with size 268435456
I1226 01:57:06.768096 21 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 1000000000
I1226 01:57:07.081240 21 model_lifecycle.cc:459] loading: riva-onnx-fastpitch_encoder-English-US:1
I1226 01:57:07.081351 21 model_lifecycle.cc:459] loading: riva-trt-hifigan-English-US:1
I1226 01:57:07.081413 21 model_lifecycle.cc:459] loading: spectrogram_chunker-English-US:1
I1226 01:57:07.081479 21 model_lifecycle.cc:459] loading: tts_postprocessor-English-US:1
I1226 01:57:07.081715 21 model_lifecycle.cc:459] loading: tts_preprocessor-English-US:1
I1226 01:57:07.096441 21 onnxruntime.cc:2459] TRITONBACKEND_Initialize: onnxruntime
I1226 01:57:07.096482 21 onnxruntime.cc:2469] Triton TRITONBACKEND API version: 1.10
I1226 01:57:07.096492 21 onnxruntime.cc:2475] 'onnxruntime' TRITONBACKEND API version: 1.10
I1226 01:57:07.096499 21 onnxruntime.cc:2505] backend configuration:
{"cmdline":{"auto-complete-config":"false","min-compute-capability":"5.300000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}}
I1226 01:57:07.332457 21 tensorrt.cc:5444] TRITONBACKEND_Initialize: tensorrt
I1226 01:57:07.333834 21 tensorrt.cc:5454] Triton TRITONBACKEND API version: 1.10
I1226 01:57:07.333867 21 tensorrt.cc:5460] 'tensorrt' TRITONBACKEND API version: 1.10
I1226 01:57:07.333882 21 tensorrt.cc:5488] backend configuration:
{"cmdline":{"auto-complete-config":"false","min-compute-capability":"5.300000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}}
I1226 01:57:07.335250 21 onnxruntime.cc:2563] TRITONBACKEND_ModelInitialize: riva-onnx-fastpitch_encoder-English-US (version 1)
I1226 01:57:07.339803 21 tensorrt.cc:5578] TRITONBACKEND_ModelInitialize: riva-trt-hifigan-English-US (version 1)
I1226 01:57:07.341346 21 backend_model.cc:188] Overriding execution policy to "TRITONBACKEND_EXECUTION_BLOCKING" for sequence model "riva-trt-hifigan-English-US"
I1226 01:57:08.301489 21 onnxruntime.cc:2606] TRITONBACKEND_ModelInstanceInitialize: riva-onnx-fastpitch_encoder-English-US_0 (GPU device 0)
E1226 01:57:16.621676 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
E1226 01:57:26.634523 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
E1226 01:57:36.636152 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
E1226 01:57:46.640413 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
E1226 01:57:56.641882 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
E1226 01:58:06.642362 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
2023-12-26 01:58:13.971932581 [W:onnxruntime:log, cuda_provider_factory.cc:227 CreateExecutionProviderFactory] CUDA took 65 seconds to start, please see this issue for how to fix it: microsoft/onnxruntime issue #10746 ("When using CUDA the first run is very slow")
2023-12-26 01:58:15.530990019 [W:onnxruntime:, session_state.cc:1030 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-12-26 01:58:15.532057032 [W:onnxruntime:, session_state.cc:1032 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
E1226 01:58:16.658967 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
E1226 01:58:26.754384 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
E1226 01:58:36.766956 23 model_registry.cc:288] error: unable to get server status: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:8001: Failed to connect to remote host: Connection refused
/opt/riva/bin/start-riva: line 55: 21 Killed ${CUSTOM_TRITON_ENV} tritonserver --log-verbose=0 --disable-auto-complete-config $model_repos --cuda-memory-pool-byte-size=0:1000000000
One of the processes has exited unexpectedly. Stopping container.
W1226 01:58:39.698544 23 riva_server.cc:196] Signal: 15
Why does the server not start listening on port 8001 inside Docker?

Hi @752636060

Thanks for your interest in Riva.

Riva supports Jetson Orin, so the platform itself is not the issue.
Your client cannot reach port 8001 because the Triton server never finished starting (your logs show the `tritonserver` process was killed).

Please check the output of `docker info | grep -i runtime` and share it with us; it should list the `nvidia` runtime.
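On Jetson, the `nvidia` runtime that `docker info | grep -i runtime` should report is registered in `/etc/docker/daemon.json`. A sketch of what that file typically looks like on JetPack, with `nvidia` set as the default runtime so containers can see the GPU (restart Docker after editing):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```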

Thanks