How to run different CUDA versions on a single A40 GPU using different Docker images

I have an A40 NVIDIA GPU in a server. After installing the OS and Kubernetes on it, I created different Docker images and want to run different CUDA versions and different TensorFlow versions. How do I implement this? Please give detailed solutions.

For just docker:

  1. Install the latest NVIDIA driver for your GPU on the base machine, not in any container.
  2. Make sure you are using a recent version of Docker (19.03 or newer), which has native GPU support.
  3. Use the NVIDIA Container Toolkit.
  4. Install the CUDA toolkit version of your choice in each container (do not install the NVIDIA GPU driver in any Docker container).
  5. When launching containers, specify the `--gpus` switch (e.g. `--gpus=all`).
  6. Profit!
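As a quick smoke test of the steps above, you can launch two containers with different CUDA toolkit versions side by side; both share the single host driver. The image tags below are illustrative examples from Docker Hub's `nvidia/cuda` repository (requires a machine with the driver and toolkit already installed):

```shell
# CUDA 11.8 container: the toolkit lives in the image, the driver on the host
docker run --rm --gpus=all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi

# CUDA 12.2 container on the same A40, a different toolkit version
docker run --rm --gpus=all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

For TensorFlow, the same idea applies: pick images that bundle the TensorFlow and CUDA versions you need, and run each in its own container.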

To set up Kubernetes, I would use the NVIDIA Cloud Native Stack.
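As a sketch of what this looks like once Kubernetes and the NVIDIA device plugin (included in the Cloud Native Stack) are in place, each pod can request the GPU and choose its own CUDA base image. The pod name and image tag here are illustrative:

```yaml
# Hypothetical pod spec: each pod picks its own CUDA base image,
# all backed by the single host driver on the A40.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-11-job                # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.8.0-base-ubuntu22.04   # any CUDA version per pod
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1          # scheduled via the NVIDIA device plugin
```

A second pod using a different `nvidia/cuda` (or TensorFlow) image tag can run the same way, so different CUDA versions coexist on the one GPU.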