Issues Using M10 for Training

I passed one of my GPUs through to an Ubuntu 20.04 VM via Proxmox.
Installed the latest NVIDIA driver on both the host and the guest.
Installed the NVIDIA container runtime.
Created a Docker image with the base: FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu20.04
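For completeness, the image was roughly along these lines (a sketch; the Python/TensorFlow install steps are assumptions, not my exact Dockerfile):

```dockerfile
# Sketch of the image setup; everything after FROM is assumed
FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu20.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install tensorflow
WORKDIR /app
COPY run.py run.sh ./
CMD ["bash", "run.sh"]
```

The container is then started through the NVIDIA runtime, e.g. docker run --gpus all <image>.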

Ran the training script and got this error:

2023-01-05 14:09:36.390622: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-01-05 14:09:36.390766: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:222] Using CUDA malloc Async allocator for GPU: 0
2023-01-05 14:09:36.390889: F tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:160] TF_GPU_ALLOCATOR=cuda_malloc_async isn't currently supported on GPU id 0: Possible causes: device not supported (request SM60+), driver too old, OS not supported, CUDA version too old(request CUDA11.2+).
run.sh: line 1:    32 Aborted                 (core dumped) python3 run.py
Output of nvidia-smi:

Thu Jan  5 14:10:18 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.13    Driver Version: 525.60.13    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla M10           On   | 00000000:01:00.0 Off |                  N/A |
| N/A   46C    P8     8W /  53W |      0MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Anyone in the know? Could use a hint here.

Unfortunately it looks like your M10 is too old for that allocator. The error asks for SM60+ (compute capability 6.0 or newer), but the Tesla M10 is a Maxwell-generation card at SM50 (compute capability 5.0), so TF_GPU_ALLOCATOR=cuda_malloc_async cannot run on it regardless of driver or CUDA version.
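If you want your script to fall back gracefully instead of core-dumping, you can gate the allocator on compute capability before TensorFlow initializes. A minimal sketch (the helper name and the hard-coded capability value are mine, not a TensorFlow API):

```python
# Sketch: only request cuda_malloc_async on GPUs that support it.
# The allocator needs SM60+, i.e. compute capability >= (6, 0);
# the Tesla M10 (Maxwell) is SM50, i.e. (5, 0).
import os

def cuda_malloc_async_supported(compute_capability):
    """True if the GPU meets the SM60+ requirement of cuda_malloc_async."""
    return tuple(compute_capability) >= (6, 0)

M10_COMPUTE_CAPABILITY = (5, 0)  # Tesla M10 is SM50

if cuda_malloc_async_supported(M10_COMPUTE_CAPABILITY):
    os.environ["TF_GPU_ALLOCATOR"] = "cuda_malloc_async"
else:
    # Fall back to TensorFlow's default BFC allocator.
    os.environ.pop("TF_GPU_ALLOCATOR", None)
```

On the machine itself you don't have to hard-code the value: with a GPU visible, tf.config.experimental.get_device_details(gpu) returns a dict that includes the 'compute_capability' of the device.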