Hi everyone, we have a Kubernetes-based platform that lets our users supply code plus some configuration to run machine learning workloads (think CI/CD). Based on that configuration, we dynamically build Docker images containing the right dependencies. I'm now extending this to deep learning workloads on GPUs. Setting aside possible version conflicts between the different components, what would be the best way to structure the Docker build process? Should I start from the Python images and install CUDA + cuDNN on top? Or should I do it the other way around, i.e. start from a CUDA base image and install Python on top (I feel like this might be more complex)? I don't have any experience yet with Docker and CUDA. Thank you!
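
For context, here is roughly what I have in mind for the second option, starting from an NVIDIA CUDA base image and installing Python on top. The image tag, versions, and file names are just placeholders for illustration, not what we actually run:

```dockerfile
# Sketch only: tag and versions are example placeholders.
# Start from NVIDIA's CUDA runtime image, which already bundles CUDA + cuDNN.
FROM nvidia/cuda:12.2.0-cudnn8-runtime-ubuntu22.04

# The CUDA image is Ubuntu-based, so a Python interpreter can be added via apt.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Install the dependencies we generate from the user's configuration.
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

# Add the user-supplied code and run it.
COPY . /app
WORKDIR /app
CMD ["python3", "main.py"]
```

My open question is whether this direction, or instead layering CUDA + cuDNN onto an official Python image, fits better with how we dynamically assemble images per configuration.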