Hi everyone, we have a Kubernetes-based platform that lets our users supply code plus some configuration to run machine learning workloads (think CI/CD). Based on the configuration, we dynamically build Docker images containing the right dependencies. I'm now expanding this to deep learning workloads on GPUs. Aside from potential version conflicts between dependencies, what would be the best way to structure the Docker build process? Should I start with the Python images and install CUDA + cuDNN on top of them? Or should I do it the other way around, starting from a CUDA image and installing Python on top (I feel like this might be more complex)? I don't have any experience yet with Docker and CUDA. Thank you!
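To make the question concrete, here is the kind of layering I imagine for the second option, starting from NVIDIA's CUDA base image and adding Python on top. This is just a sketch, not something I've tested; the image tag and package versions are placeholder guesses:

```dockerfile
# Sketch only -- tag and versions are placeholders, not a tested build.
# Start from NVIDIA's official CUDA runtime image; the -cudnn- variants
# already bundle cuDNN, so we wouldn't install it ourselves.
FROM nvidia/cuda:12.4.1-cudnn-runtime-ubuntu22.04

# Install Python on top of the CUDA base (the reverse of starting
# from a python:* image and layering CUDA on top).
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Our build system would inject the user-supplied dependency list here.
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt
```

Would this kind of structure be the recommended direction, or is layering CUDA onto a Python base image viable too?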