Triton image for Jetson Nano

Hi,
I am trying to set up Triton on a Jetson Nano.
I would like help starting the Triton server and loading my models.

Thank you!

Could you elaborate on your question?

tritonserver --model-repository=qa/custom_models/model_repo --backend-directory=backends --backend-config=tensorflow,version=1
2022-05-06 13:36:22.748446: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
I0506 08:06:25.490642 19218 tensorflow.cc:2167] TRITONBACKEND_Initialize: tensorflow
I0506 08:06:25.490764 19218 tensorflow.cc:2180] Triton TRITONBACKEND API version: 1.4
I0506 08:06:25.490794 19218 tensorflow.cc:2186] 'tensorflow' TRITONBACKEND API version: 1.4
I0506 08:06:25.490817 19218 tensorflow.cc:2207] backend configuration:
{"cmdline":{"version":"1"}}
I0506 08:06:26.478012 19218 onnxruntime.cc:1971] TRITONBACKEND_Initialize: onnxruntime
I0506 08:06:26.478095 19218 onnxruntime.cc:1984] Triton TRITONBACKEND API version: 1.4
I0506 08:06:26.478125 19218 onnxruntime.cc:1990] 'onnxruntime' TRITONBACKEND API version: 1.4
I0506 08:06:26.858306 19218 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x100c60000' with size 268435456
I0506 08:06:26.858552 19218 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0506 08:06:26.965595 19218 tritonserver.cc:1718]
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.11.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data |
| | statistics |
| model_repository_path[0] | qa/custom_models/model_repo |
| model_control_mode | MODE_NONE |
| strict_model_config | 1 |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| min_supported_compute_capability | 5.3 |
| strict_readiness | 1 |
| exit_timeout | 30 |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I0506 08:06:26.965737 19218 server.cc:231] No server context available. Exiting immediately.
error: creating server: Internal - failed to stat file qa/custom_models/model_repo

I mean that I am unable to load the models into the server and get it running for inference.
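
The key line in your log is the last one: "error: creating server: Internal - failed to stat file qa/custom_models/model_repo". Triton cannot find the model repository, and a relative --model-repository path is resolved against the directory the server is launched from. A quick sanity check (a sketch, assuming you launch tritonserver from the same shell; the path is copied from your command):

# Confirm the repository path exists relative to where tritonserver is launched
pwd
ls -ld qa/custom_models/model_repo

# List the model directories Triton would try to load
ls qa/custom_models/model_repo

If the directory exists on the host but you start Triton inside a container, the repository also has to be mounted into the container (for example with docker run -v <host_path>:<container_path>).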

Please follow https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps (sample app code for deploying TAO Toolkit trained models to Triton) and double-check your setup. Thanks.
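
Also double-check the repository layout itself. Your log shows strict_model_config set to 1, which means every model needs an explicit config.pbtxt. Below is a minimal sketch of the layout Triton expects, using a hypothetical TensorFlow GraphDef model named my_model; the model name, tensor names, shapes, and types are placeholders, not taken from your setup:

# Hypothetical layout: <repository>/<model_name>/<version>/<model file>
mkdir -p qa/custom_models/model_repo/my_model/1
# Copy your frozen graph to qa/custom_models/model_repo/my_model/1/model.graphdef

# Placeholder config.pbtxt; replace names, shapes, and types with your model's
cat > qa/custom_models/model_repo/my_model/config.pbtxt <<'EOF'
name: "my_model"
platform: "tensorflow_graphdef"
max_batch_size: 8
input [
  { name: "input_tensor", data_type: TYPE_FP32, dims: [ 224, 224, 3 ] }
]
output [
  { name: "output_tensor", data_type: TYPE_FP32, dims: [ 1000 ] }
]
EOF

Once the server starts cleanly, you can verify that it is up and the model is loaded over HTTP (default port 8000):

curl -v localhost:8000/v2/health/ready
curl localhost:8000/v2/models/my_model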

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
