Tritonserver 22.02-py3 on Jetson Nano with JetPack 4.6.2 doesn't find a CUDA-compatible device

Fresh install using the SDK Manager, with JetPack 4.6.2 and all options selected.
Cloned the tritonserver 2.19.0 branch, which corresponds to tritonserver 22.02-py3; both are the latest versions compatible with JetPack 4.6.1 and the Jetson Nano.

When I start the container with:
sudo docker run --runtime nvidia --rm --net=host --ipc=host --shm-size=1g -v $(pwd)/server/docs/examples/model_repository:/models nvcr.io/nvidia/tritonserver:22.02-py3 tritonserver --model-repository=/models

I get:
Warning: [Torch-TensorRT] - Unable to read CUDA capable devices. Return Status:999

docker info | grep nvidia
returns:
nvidia runc io.containerd.runc.v2 …etc
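
For reference, the runtime registration comes from /etc/docker/daemon.json; on a stock JetPack install it looks roughly like this (a sketch, exact contents may vary):

cat /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}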

Thanks

Please check this thread to see if it can help:
Triton Installation on Jetson Nano - Jetson & Embedded Systems / Jetson Nano - NVIDIA Developer Forums

Hi,

The container you used is built for desktop (x86) machines, so it cannot find a CUDA device on the Jetson.

Please use the deepstream-l4t:6.1-triton container, which has the Triton server pre-installed.
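
A minimal sketch of starting Triton from that image, assuming the full registry path nvcr.io/nvidia/deepstream-l4t:6.1-triton and that the tritonserver binary is on the image's PATH:

sudo docker run --runtime nvidia --rm --net=host -v $(pwd)/server/docs/examples/model_repository:/models nvcr.io/nvidia/deepstream-l4t:6.1-triton tritonserver --model-repository=/models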
Or follow the document below to set up Triton natively on the Jetson:

(The steps for JetPack 4.6.2 and 4.6.1 should be the same)
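
For the native route, the flow is roughly as follows (a sketch; the release asset name tritonserver2.19.0-jetpack4.6.1.tgz and the paths are assumptions, so check the linked document and the GitHub releases page for the exact files):

# Download the JetPack build of Triton attached to the v2.19.0 GitHub release
wget https://github.com/triton-inference-server/server/releases/download/v2.19.0/tritonserver2.19.0-jetpack4.6.1.tgz
mkdir tritonserver && tar -xzf tritonserver2.19.0-jetpack4.6.1.tgz -C tritonserver
# Run the server directly on the Nano against the example model repository
./tritonserver/bin/tritonserver --model-repository=$(pwd)/server/docs/examples/model_repository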

Thanks.
