Triton - unable to get number of cuda devices

We have a crucial field event coming up to test an IR camera + inference pipeline, and we've been having issues with NVIDIA Triton (run via nvidia-docker) picking up the GPU. The crux of the problem:

E0513 02:52:51.506585 1 model_repository_manager.cc:1682] unable to get number of CUDA devices: unknown error

The GPU is a Pascal-class GeForce GTX 1060.

(v1) clarifai@clarifai-Predator-G5-793:~/work/clarifai$ docker run --gpus all -it nvcr.io/nvidia/tritonserver:21.02-py3 /bin/bash -l

=============================
== Triton Inference Server ==
=============================

NVIDIA Release 21.02 (build 20174689)

Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

ERROR: No supported GPU(s) detected to run this container

root@eb76fc703ebe:/opt/tritonserver# nvidia-smi
Thu May 13 03:17:18 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.73.01    Driver Version: 460.73.01    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 1060    Off  | 00000000:01:00.0  On |                  N/A |
| N/A   40C    P8     6W /  N/A |    484MiB /  6070MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
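For anyone trying to narrow this down: a quick way to check whether the CUDA driver API is even reachable inside the container is to call cuDeviceGetCount directly, which is the same query Triton's error message is failing on. This is a minimal diagnostic sketch of my own (not from Triton), assuming Python and ctypes are available in the container:

```python
# Query the CUDA driver API directly via ctypes, bypassing Triton,
# to separate "driver broken in container" from "Triton misconfigured".
import ctypes

def cuda_device_count():
    """Return (count, status): count is the GPU count on success,
    or None with a status string describing which step failed."""
    try:
        # libcuda.so.1 is the CUDA driver library injected by the
        # NVIDIA container runtime; if it's missing, --gpus never took effect.
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError as exc:
        return None, f"libcuda not found: {exc}"
    rc = libcuda.cuInit(0)
    if rc != 0:  # CUDA_SUCCESS == 0
        return None, f"cuInit failed with code {rc}"
    count = ctypes.c_int(0)
    rc = libcuda.cuDeviceGetCount(ctypes.byref(count))
    if rc != 0:
        return None, f"cuDeviceGetCount failed with code {rc}"
    return count.value, "ok"

if __name__ == "__main__":
    print(cuda_device_count())
```

If this succeeds on the host but fails inside the container (often with code 999, "unknown error"), the problem is the container runtime / driver handoff rather than Triton itself.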