• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1
Hello, I’m trying to run the Triton Inference Server from the Docker image on this board with the following command:
docker run --rm -it --runtime nvidia --gpus=1 -v ./triton:/models nvcr.io/nvidia/tritonserver:23.01-py3 tritonserver --model-repository=/models
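In case the repository layout is relevant: the local ./triton directory mounted at /models follows (as far as I understand it) the standard Triton model-repository structure, roughly as sketched below. The model and file names here are only placeholders to illustrate the structure, not my actual model:

    /models
    └── <model_name>            (placeholder model name)
        ├── config.pbtxt        (Triton model configuration)
        └── 1/                  (version directory)
            └── model.plan      (or model.onnx etc., depending on the backend)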
However, the server fails to start; the logs are attached.
server.log (12.7 KB)
Please advise how to run the Triton server on the platform specified above.