Fresh install using SDK Manager, with JetPack 4.6.2 and all available options installed.
Cloned the Triton Server 2.19.0 branch, which matches the tritonserver 22.02-py3 container; both are the latest versions compatible with JetPack 4.6.1 and the Jetson Nano.
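For reference, the clone step looked roughly like this (r22.02 is the release branch that carries 2.19.0, and fetch_models.sh populates the example model repository used below):

git clone -b r22.02 https://github.com/triton-inference-server/server.git
cd server/docs/examples
./fetch_models.sh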
When I start the container with:
sudo docker run --runtime nvidia --rm --net=host --ipc=host --shm-size=1g -v $(pwd)/server/docs/examples/model_repository:/models nvcr.io/nvidia/tritonserver:22.02-py3 tritonserver --model-repository=/models
I get:
Warning: [Torch-TensorRT] - Unable to read CUDA capable devices. Return Status:999
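(In case it is relevant: the host-side CUDA install can be sanity-checked with the deviceQuery sample that SDK Manager ships with the CUDA toolkit; the path below assumes the default /usr/local/cuda location.)

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery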
docker info | grep nvidia
returns:
nvidia runc io.containerd.runc.v2 …etc
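(The runtime registration itself can also be inspected with the command below; on a stock JetPack install, /etc/docker/daemon.json should list an nvidia entry under "runtimes" pointing at nvidia-container-runtime.)

cat /etc/docker/daemon.json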