CUDA compatibility not working as expected

Hi,
I have a host system with a Tesla K80 GPU and driver 470.182.03 installed. According to the CUDA Application Compatibility Support Matrix in the cuda-compatibility documentation, I would expect to be able to use the nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 image, which ships the CUDA compat package. However, when I run TensorFlow inside the container, I see the following error:

2023-06-28 15:21:50.293321: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-06-28 15:21:54.252470: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:266] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2023-06-28 15:21:54.252559: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:168] retrieving CUDA diagnostic information for host: b6728f6e8f33
2023-06-28 15:21:54.252596: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:175] hostname: b6728f6e8f33
2023-06-28 15:21:54.252754: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:199] libcuda reported version is: 520.61.5
2023-06-28 15:21:54.252822: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:203] kernel reported version is: 470.182.3
2023-06-28 15:21:54.252841: E tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:312] kernel version 470.182.3 does not match DSO version 520.61.5 -- cannot find working devices in this configuration
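If I'm reading the last log line right, TensorFlow is comparing the kernel-mode driver version against the user-mode libcuda (DSO) version that the compat package provides, and refusing to use the device on a branch mismatch. A toy sketch of that comparison with the versions from my log (my own illustration, not TensorFlow's actual code):

```python
# Toy illustration (not TensorFlow's real check) of the mismatch behind
# "kernel version ... does not match DSO version ...".

def parse(v):
    # "470.182.3" -> (470, 182, 3)
    return tuple(int(x) for x in v.split("."))

kernel = parse("470.182.3")  # kernel reported version, from my log
dso = parse("520.61.5")      # libcuda (DSO) reported version, from my log

if kernel[0] != dso[0]:
    # The user-mode driver is from a newer branch than the kernel driver,
    # which is exactly the situation the compat package is supposed to cover.
    print(f"kernel branch {kernel[0]} vs DSO branch {dso[0]}: mismatch")
```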

Am I missing something?